00:00:00.001 Started by upstream project "autotest-per-patch" build number 132320 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.027 The recommended git tool is: git 00:00:00.027 using credential 00000000-0000-0000-0000-000000000002 00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.048 Fetching changes from the remote Git repository 00:00:00.050 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.067 Using shallow fetch with depth 1 00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.067 > git --version # timeout=10 00:00:00.081 > git --version # 'git version 2.39.2' 00:00:00.082 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.101 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.101 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.581 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.597 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.612 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.612 > git config core.sparsecheckout # timeout=10 00:00:03.626 > git read-tree -mu HEAD # timeout=10 00:00:03.644 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.671 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.672 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.788 [Pipeline] Start of Pipeline 00:00:03.799 [Pipeline] library 00:00:03.800 Loading library shm_lib@master 00:00:03.801 Library shm_lib@master is cached. Copying from home. 00:00:03.815 [Pipeline] node 00:00:03.833 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:03.835 [Pipeline] { 00:00:03.842 [Pipeline] catchError 00:00:03.843 [Pipeline] { 00:00:03.853 [Pipeline] wrap 00:00:03.862 [Pipeline] { 00:00:03.868 [Pipeline] stage 00:00:03.870 [Pipeline] { (Prologue) 00:00:03.886 [Pipeline] echo 00:00:03.888 Node: VM-host-SM17 00:00:03.893 [Pipeline] cleanWs 00:00:03.902 [WS-CLEANUP] Deleting project workspace... 00:00:03.902 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.907 [WS-CLEANUP] done 00:00:04.100 [Pipeline] setCustomBuildProperty 00:00:04.171 [Pipeline] httpRequest 00:00:04.493 [Pipeline] echo 00:00:04.495 Sorcerer 10.211.164.20 is alive 00:00:04.503 [Pipeline] retry 00:00:04.506 [Pipeline] { 00:00:04.517 [Pipeline] httpRequest 00:00:04.521 HttpMethod: GET 00:00:04.522 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.522 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.544 Response Code: HTTP/1.1 200 OK 00:00:04.545 Success: Status code 200 is in the accepted range: 200,404 00:00:04.546 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:34.716 [Pipeline] } 00:00:34.727 [Pipeline] // retry 00:00:34.733 [Pipeline] sh 00:00:35.013 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:35.028 [Pipeline] httpRequest 00:00:35.722 [Pipeline] echo 00:00:35.724 Sorcerer 10.211.164.20 is alive 00:00:35.732 [Pipeline] retry 00:00:35.734 [Pipeline] { 00:00:35.748 [Pipeline] httpRequest 00:00:35.752 HttpMethod: GET 00:00:35.753 URL: http://10.211.164.20/packages/spdk_fc96810c2908a9f503c6238994746205c3fdd19e.tar.gz 00:00:35.754 Sending request to url: http://10.211.164.20/packages/spdk_fc96810c2908a9f503c6238994746205c3fdd19e.tar.gz 00:00:35.758 Response Code: HTTP/1.1 200 OK 00:00:35.759 Success: Status code 200 is in the accepted range: 200,404 00:00:35.760 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_fc96810c2908a9f503c6238994746205c3fdd19e.tar.gz 00:02:29.828 [Pipeline] } 00:02:29.848 [Pipeline] // retry 00:02:29.856 [Pipeline] sh 00:02:30.136 + tar --no-same-owner -xf spdk_fc96810c2908a9f503c6238994746205c3fdd19e.tar.gz 00:02:33.438 [Pipeline] sh 00:02:33.719 + git -C spdk log --oneline -n5 00:02:33.719 fc96810c2 bdev: remove bdev from examine allow list on unregister 00:02:33.719 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public 00:02:33.719 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes. 00:02:33.719 03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header. 
00:02:33.719 d47eb51c9 bdev: fix a race between reset start and complete 00:02:33.737 [Pipeline] writeFile 00:02:33.752 [Pipeline] sh 00:02:34.034 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:34.046 [Pipeline] sh 00:02:34.326 + cat autorun-spdk.conf 00:02:34.326 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.326 SPDK_TEST_NVMF=1 00:02:34.326 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:34.326 SPDK_TEST_URING=1 00:02:34.326 SPDK_TEST_USDT=1 00:02:34.326 SPDK_RUN_UBSAN=1 00:02:34.326 NET_TYPE=virt 00:02:34.326 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:34.333 RUN_NIGHTLY=0 00:02:34.335 [Pipeline] } 00:02:34.348 [Pipeline] // stage 00:02:34.363 [Pipeline] stage 00:02:34.364 [Pipeline] { (Run VM) 00:02:34.377 [Pipeline] sh 00:02:34.740 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:34.740 + echo 'Start stage prepare_nvme.sh' 00:02:34.740 Start stage prepare_nvme.sh 00:02:34.740 + [[ -n 3 ]] 00:02:34.740 + disk_prefix=ex3 00:02:34.740 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:34.740 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:34.740 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:34.740 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.740 ++ SPDK_TEST_NVMF=1 00:02:34.740 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:34.740 ++ SPDK_TEST_URING=1 00:02:34.740 ++ SPDK_TEST_USDT=1 00:02:34.740 ++ SPDK_RUN_UBSAN=1 00:02:34.740 ++ NET_TYPE=virt 00:02:34.740 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:34.740 ++ RUN_NIGHTLY=0 00:02:34.740 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:34.740 + nvme_files=() 00:02:34.740 + declare -A nvme_files 00:02:34.740 + backend_dir=/var/lib/libvirt/images/backends 00:02:34.740 + nvme_files['nvme.img']=5G 00:02:34.740 + nvme_files['nvme-cmb.img']=5G 00:02:34.740 + nvme_files['nvme-multi0.img']=4G 00:02:34.740 + nvme_files['nvme-multi1.img']=4G 00:02:34.740 + nvme_files['nvme-multi2.img']=4G 00:02:34.740 + nvme_files['nvme-openstack.img']=8G 00:02:34.740 + nvme_files['nvme-zns.img']=5G 00:02:34.740 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:34.740 + (( SPDK_TEST_FTL == 1 )) 00:02:34.740 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:34.740 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:34.740 + for nvme in "${!nvme_files[@]}" 00:02:34.740 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:02:34.740 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:34.740 + for nvme in "${!nvme_files[@]}" 00:02:34.740 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:02:34.740 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:34.740 + for nvme in "${!nvme_files[@]}" 00:02:34.740 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:02:34.740 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:34.740 + for nvme in "${!nvme_files[@]}" 00:02:34.741 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:02:34.741 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:34.741 + for nvme in "${!nvme_files[@]}" 00:02:34.741 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:02:34.741 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:34.741 + for nvme in "${!nvme_files[@]}" 00:02:34.741 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:02:34.741 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:34.741 + for nvme in "${!nvme_files[@]}" 00:02:34.741 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:02:35.677 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:35.677 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:02:35.677 + echo 'End stage prepare_nvme.sh' 00:02:35.677 End stage prepare_nvme.sh 00:02:35.688 [Pipeline] sh 00:02:35.970 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:35.970 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:02:35.970 00:02:35.970 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:35.970 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:35.970 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:35.970 HELP=0 00:02:35.970 DRY_RUN=0 00:02:35.970 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:02:35.970 NVME_DISKS_TYPE=nvme,nvme, 00:02:35.970 NVME_AUTO_CREATE=0 00:02:35.970 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:02:35.970 NVME_CMB=,, 00:02:35.970 NVME_PMR=,, 00:02:35.970 NVME_ZNS=,, 00:02:35.970 NVME_MS=,, 00:02:35.970 NVME_FDP=,, 
00:02:35.970 SPDK_VAGRANT_DISTRO=fedora39 00:02:35.970 SPDK_VAGRANT_VMCPU=10 00:02:35.970 SPDK_VAGRANT_VMRAM=12288 00:02:35.970 SPDK_VAGRANT_PROVIDER=libvirt 00:02:35.970 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:35.970 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:35.970 SPDK_OPENSTACK_NETWORK=0 00:02:35.970 VAGRANT_PACKAGE_BOX=0 00:02:35.970 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:35.970 FORCE_DISTRO=true 00:02:35.970 VAGRANT_BOX_VERSION= 00:02:35.970 EXTRA_VAGRANTFILES= 00:02:35.970 NIC_MODEL=e1000 00:02:35.970 00:02:35.970 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:35.970 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:39.260 Bringing machine 'default' up with 'libvirt' provider... 00:02:39.260 ==> default: Creating image (snapshot of base box volume). 00:02:39.520 ==> default: Creating domain with the following settings... 00:02:39.520 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732010213_754588c2d7f7c12181fd 00:02:39.520 ==> default: -- Domain type: kvm 00:02:39.520 ==> default: -- Cpus: 10 00:02:39.520 ==> default: -- Feature: acpi 00:02:39.520 ==> default: -- Feature: apic 00:02:39.520 ==> default: -- Feature: pae 00:02:39.520 ==> default: -- Memory: 12288M 00:02:39.520 ==> default: -- Memory Backing: hugepages: 00:02:39.520 ==> default: -- Management MAC: 00:02:39.520 ==> default: -- Loader: 00:02:39.520 ==> default: -- Nvram: 00:02:39.520 ==> default: -- Base box: spdk/fedora39 00:02:39.520 ==> default: -- Storage pool: default 00:02:39.520 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732010213_754588c2d7f7c12181fd.img (20G) 00:02:39.520 ==> default: -- Volume Cache: default 00:02:39.520 ==> default: -- Kernel: 00:02:39.520 ==> default: -- Initrd: 00:02:39.520 ==> default: -- Graphics Type: vnc 00:02:39.520 ==> default: -- Graphics Port: -1 00:02:39.520 ==> default: -- Graphics IP: 127.0.0.1 00:02:39.520 ==> default: -- Graphics Password: Not defined 00:02:39.520 ==> default: -- Video Type: cirrus 00:02:39.520 ==> default: -- Video VRAM: 9216 00:02:39.520 ==> default: -- Sound Type: 00:02:39.520 ==> default: -- Keymap: en-us 00:02:39.520 ==> default: -- TPM Path: 00:02:39.520 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:39.520 ==> default: -- Command line args: 00:02:39.520 ==> default: -> value=-device, 00:02:39.520 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:39.520 ==> default: -> value=-drive, 00:02:39.520 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:02:39.521 ==> default: -> value=-device, 00:02:39.521 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:39.521 ==> default: -> value=-device, 00:02:39.521 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:39.521 ==> default: -> value=-drive, 00:02:39.521 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:39.521 ==> default: -> value=-device, 00:02:39.521 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:39.521 ==> default: -> value=-drive, 00:02:39.521 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:39.521 ==> default: -> value=-device, 00:02:39.521 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:39.521 ==> default: -> value=-drive, 00:02:39.521 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:39.521 ==> default: -> value=-device, 00:02:39.521 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:39.780 ==> default: Creating shared folders metadata... 00:02:39.780 ==> default: Starting domain. 00:02:41.159 ==> default: Waiting for domain to get an IP address... 00:02:59.256 ==> default: Waiting for SSH to become available... 00:03:00.634 ==> default: Configuring and enabling network interfaces... 00:03:04.825 default: SSH address: 192.168.121.65:22 00:03:04.825 default: SSH username: vagrant 00:03:04.825 default: SSH auth method: private key 00:03:06.728 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:14.849 ==> default: Mounting SSHFS shared folder... 00:03:15.786 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:15.786 ==> default: Checking Mount.. 00:03:17.168 ==> default: Folder Successfully Mounted! 00:03:17.168 ==> default: Running provisioner: file... 00:03:18.106 default: ~/.gitconfig => .gitconfig 00:03:18.366 00:03:18.366 SUCCESS! 00:03:18.366 00:03:18.366 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:18.366 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:18.366 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:18.366 00:03:18.375 [Pipeline] } 00:03:18.390 [Pipeline] // stage 00:03:18.398 [Pipeline] dir 00:03:18.399 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:03:18.401 [Pipeline] { 00:03:18.413 [Pipeline] catchError 00:03:18.415 [Pipeline] { 00:03:18.427 [Pipeline] sh 00:03:18.708 + vagrant ssh-config --host vagrant 00:03:18.708 + sed -ne /^Host/,$p 00:03:18.708 + tee ssh_conf 00:03:21.999 Host vagrant 00:03:21.999 HostName 192.168.121.65 00:03:21.999 User vagrant 00:03:21.999 Port 22 00:03:21.999 UserKnownHostsFile /dev/null 00:03:21.999 StrictHostKeyChecking no 00:03:21.999 PasswordAuthentication no 00:03:21.999 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:21.999 IdentitiesOnly yes 00:03:21.999 LogLevel FATAL 00:03:21.999 ForwardAgent yes 00:03:21.999 ForwardX11 yes 00:03:21.999 00:03:22.014 [Pipeline] withEnv 00:03:22.016 [Pipeline] { 00:03:22.030 [Pipeline] sh 00:03:22.310 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:22.310 source /etc/os-release 00:03:22.310 [[ -e /image.version ]] && img=$(< /image.version) 00:03:22.310 # Minimal, systemd-like check. 
00:03:22.310 if [[ -e /.dockerenv ]]; then 00:03:22.310 # Clear garbage from the node's name: 00:03:22.310 # agt-er_autotest_547-896 -> autotest_547-896 00:03:22.310 # $HOSTNAME is the actual container id 00:03:22.310 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:22.310 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:22.310 # We can assume this is a mount from a host where container is running, 00:03:22.310 # so fetch its hostname to easily identify the target swarm worker. 00:03:22.310 container="$(< /etc/hostname) ($agent)" 00:03:22.310 else 00:03:22.310 # Fallback 00:03:22.310 container=$agent 00:03:22.310 fi 00:03:22.310 fi 00:03:22.310 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:22.310 00:03:22.321 [Pipeline] } 00:03:22.337 [Pipeline] // withEnv 00:03:22.346 [Pipeline] setCustomBuildProperty 00:03:22.361 [Pipeline] stage 00:03:22.363 [Pipeline] { (Tests) 00:03:22.379 [Pipeline] sh 00:03:22.660 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:22.933 [Pipeline] sh 00:03:23.213 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:23.227 [Pipeline] timeout 00:03:23.228 Timeout set to expire in 1 hr 0 min 00:03:23.229 [Pipeline] { 00:03:23.244 [Pipeline] sh 00:03:23.528 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:24.098 HEAD is now at fc96810c2 bdev: remove bdev from examine allow list on unregister 00:03:24.111 [Pipeline] sh 00:03:24.396 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:24.668 [Pipeline] sh 00:03:24.949 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:25.225 [Pipeline] sh 00:03:25.503 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:25.762 ++ readlink -f spdk_repo 00:03:25.762 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:25.762 + [[ -n /home/vagrant/spdk_repo ]] 00:03:25.762 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:25.762 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:25.762 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:25.762 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:25.762 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:25.762 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:25.762 + cd /home/vagrant/spdk_repo 00:03:25.762 + source /etc/os-release 00:03:25.762 ++ NAME='Fedora Linux' 00:03:25.762 ++ VERSION='39 (Cloud Edition)' 00:03:25.762 ++ ID=fedora 00:03:25.762 ++ VERSION_ID=39 00:03:25.762 ++ VERSION_CODENAME= 00:03:25.762 ++ PLATFORM_ID=platform:f39 00:03:25.762 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:25.762 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:25.762 ++ LOGO=fedora-logo-icon 00:03:25.762 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:25.762 ++ HOME_URL=https://fedoraproject.org/ 00:03:25.762 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:25.762 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:25.762 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:25.762 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:25.762 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:25.762 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:25.762 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:25.762 ++ SUPPORT_END=2024-11-12 00:03:25.762 ++ VARIANT='Cloud Edition' 00:03:25.762 ++ VARIANT_ID=cloud 00:03:25.762 + uname -a 00:03:25.762 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:25.762 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:26.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.331 Hugepages 00:03:26.331 node hugesize free / total 00:03:26.331 node0 1048576kB 0 / 0 00:03:26.331 node0 2048kB 0 / 0 00:03:26.331 00:03:26.331 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:26.331 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:26.331 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:26.331 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:26.331 + rm -f /tmp/spdk-ld-path 00:03:26.331 + source autorun-spdk.conf 00:03:26.331 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:26.331 ++ SPDK_TEST_NVMF=1 00:03:26.331 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:26.331 ++ SPDK_TEST_URING=1 00:03:26.331 ++ SPDK_TEST_USDT=1 00:03:26.331 ++ SPDK_RUN_UBSAN=1 00:03:26.331 ++ NET_TYPE=virt 00:03:26.331 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:26.331 ++ RUN_NIGHTLY=0 00:03:26.331 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:26.331 + [[ -n '' ]] 00:03:26.331 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:26.331 + for M in /var/spdk/build-*-manifest.txt 00:03:26.331 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:26.331 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:26.331 + for M in /var/spdk/build-*-manifest.txt 00:03:26.331 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:26.331 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:26.331 + for M in /var/spdk/build-*-manifest.txt 00:03:26.331 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:26.331 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:26.331 ++ uname 00:03:26.331 + [[ Linux == \L\i\n\u\x ]] 00:03:26.331 + sudo dmesg -T 00:03:26.331 + sudo dmesg --clear 00:03:26.331 + dmesg_pid=5211 00:03:26.331 + sudo dmesg -Tw 00:03:26.331 + [[ Fedora Linux == FreeBSD ]] 00:03:26.331 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:26.331 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:26.331 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:26.331 + [[ -x /usr/src/fio-static/fio ]] 00:03:26.331 + export FIO_BIN=/usr/src/fio-static/fio 00:03:26.331 + FIO_BIN=/usr/src/fio-static/fio 00:03:26.331 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:26.331 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:26.331 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:26.331 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:26.331 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:26.331 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:26.331 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:26.331 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:26.331 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:26.590 09:57:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:26.590 09:57:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:26.590 09:57:40 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:26.590 09:57:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:26.590 09:57:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:26.590 09:57:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:26.590 09:57:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:26.590 09:57:40 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:26.590 09:57:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:26.590 09:57:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:26.590 09:57:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:26.590 09:57:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.590 09:57:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.590 09:57:40 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.590 09:57:40 -- paths/export.sh@5 -- $ export PATH 00:03:26.590 09:57:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.590 09:57:40 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:26.590 09:57:40 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:26.590 09:57:40 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732010260.XXXXXX 00:03:26.590 09:57:40 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732010260.lTHe7A 00:03:26.590 09:57:40 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:26.590 09:57:40 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:26.590 09:57:40 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:26.590 09:57:40 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:26.590 09:57:40 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:26.590 09:57:40 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:26.590 09:57:40 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:26.590 09:57:40 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.590 09:57:40 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:03:26.590 09:57:40 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:26.590 09:57:40 -- pm/common@17 -- $ local monitor 00:03:26.590 09:57:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.590 09:57:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.590 09:57:40 -- pm/common@25 -- $ sleep 1 00:03:26.590 09:57:40 -- pm/common@21 -- $ date +%s 00:03:26.590 09:57:40 -- pm/common@21 -- $ date +%s 00:03:26.590 09:57:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732010260 00:03:26.590 09:57:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732010260 00:03:26.590 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732010260_collect-vmstat.pm.log 00:03:26.590 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732010260_collect-cpu-load.pm.log 00:03:27.527 09:57:41 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:27.527 09:57:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:27.527 09:57:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:27.527 09:57:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:27.527 09:57:41 -- spdk/autobuild.sh@16 -- $ date -u 00:03:27.527 Tue Nov 19 09:57:41 AM UTC 2024 00:03:27.527 09:57:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:27.527 v25.01-pre-194-gfc96810c2 00:03:27.527 09:57:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:27.527 09:57:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:27.527 09:57:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:27.527 09:57:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:27.527 09:57:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:27.527 09:57:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:27.527 ************************************ 00:03:27.527 START TEST ubsan 00:03:27.527 ************************************ 00:03:27.527 using ubsan 00:03:27.527 09:57:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:27.527 00:03:27.527 real 0m0.000s 00:03:27.527 user 0m0.000s 00:03:27.527 sys 0m0.000s 00:03:27.527 09:57:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:27.527 09:57:41 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:27.527 ************************************ 00:03:27.527 END TEST ubsan 00:03:27.527 ************************************ 00:03:27.527 09:57:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:27.527 09:57:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:27.527 09:57:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:27.527 09:57:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:27.527 09:57:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:27.527 09:57:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:27.527 09:57:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:27.527 09:57:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:27.527 09:57:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:03:27.786 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:27.786 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:28.045 Using 'verbs' RDMA provider 00:03:43.864 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:56.073 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:56.073 Creating mk/config.mk...done. 00:03:56.073 Creating mk/cc.flags.mk...done. 00:03:56.073 Type 'make' to build. 
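[Editor's aside] A minimal local sketch of the build step the job performs next. The flags are a subset of the config_params line printed above and the -j value comes from the run_test line below; treat the exact invocation as an assumption, not the canonical CI command.

# hypothetical manual reproduction of the autobuild configure + make step
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
            --with-uring --with-usdt --with-ublk --with-shared
make -j10

In the CI run these two steps are driven by spdk/autobuild.sh and autorun.sh rather than typed by hand.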
00:03:56.073 09:58:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:56.073 09:58:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:56.073 09:58:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:56.073 09:58:08 -- common/autotest_common.sh@10 -- $ set +x 00:03:56.073 ************************************ 00:03:56.073 START TEST make 00:03:56.073 ************************************ 00:03:56.073 09:58:08 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:56.073 make[1]: Nothing to be done for 'all'. 00:04:08.288 The Meson build system 00:04:08.288 Version: 1.5.0 00:04:08.288 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:08.288 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:08.288 Build type: native build 00:04:08.288 Program cat found: YES (/usr/bin/cat) 00:04:08.288 Project name: DPDK 00:04:08.288 Project version: 24.03.0 00:04:08.288 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:08.288 C linker for the host machine: cc ld.bfd 2.40-14 00:04:08.288 Host machine cpu family: x86_64 00:04:08.288 Host machine cpu: x86_64 00:04:08.288 Message: ## Building in Developer Mode ## 00:04:08.288 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:08.288 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:08.288 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:08.288 Program python3 found: YES (/usr/bin/python3) 00:04:08.288 Program cat found: YES (/usr/bin/cat) 00:04:08.288 Compiler for C supports arguments -march=native: YES 00:04:08.288 Checking for size of "void *" : 8 00:04:08.288 Checking for size of "void *" : 8 (cached) 00:04:08.288 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:08.288 Library m found: YES 00:04:08.288 Library numa found: YES 00:04:08.288 Has header "numaif.h" : YES 00:04:08.288 Library fdt found: NO 00:04:08.288 Library execinfo found: NO 00:04:08.288 Has header "execinfo.h" : YES 00:04:08.288 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:08.288 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:08.288 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:08.288 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:08.288 Run-time dependency openssl found: YES 3.1.1 00:04:08.288 Run-time dependency libpcap found: YES 1.10.4 00:04:08.288 Has header "pcap.h" with dependency libpcap: YES 00:04:08.288 Compiler for C supports arguments -Wcast-qual: YES 00:04:08.288 Compiler for C supports arguments -Wdeprecated: YES 00:04:08.288 Compiler for C supports arguments -Wformat: YES 00:04:08.288 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:08.288 Compiler for C supports arguments -Wformat-security: NO 00:04:08.288 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:08.288 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:08.288 Compiler for C supports arguments -Wnested-externs: YES 00:04:08.288 Compiler for C supports arguments -Wold-style-definition: YES 00:04:08.288 Compiler for C supports arguments -Wpointer-arith: YES 00:04:08.288 Compiler for C supports arguments -Wsign-compare: YES 00:04:08.288 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:08.288 Compiler for C supports arguments -Wundef: YES 00:04:08.288 Compiler for C supports arguments -Wwrite-strings: YES 00:04:08.288 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:04:08.288 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:08.288 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:08.288 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:08.288 Program objdump found: YES (/usr/bin/objdump) 00:04:08.288 Compiler for C supports arguments -mavx512f: YES 00:04:08.288 Checking if "AVX512 checking" compiles: YES 00:04:08.288 Fetching value of define "__SSE4_2__" : 1 00:04:08.288 Fetching value of define "__AES__" : 1 00:04:08.288 Fetching value of define "__AVX__" : 1 00:04:08.288 Fetching value of define "__AVX2__" : 1 00:04:08.288 Fetching value of define "__AVX512BW__" : (undefined) 00:04:08.288 Fetching value of define "__AVX512CD__" : (undefined) 00:04:08.288 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:08.288 Fetching value of define "__AVX512F__" : (undefined) 00:04:08.288 Fetching value of define "__AVX512VL__" : (undefined) 00:04:08.288 Fetching value of define "__PCLMUL__" : 1 00:04:08.288 Fetching value of define "__RDRND__" : 1 00:04:08.288 Fetching value of define "__RDSEED__" : 1 00:04:08.288 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:08.288 Fetching value of define "__znver1__" : (undefined) 00:04:08.288 Fetching value of define "__znver2__" : (undefined) 00:04:08.288 Fetching value of define "__znver3__" : (undefined) 00:04:08.288 Fetching value of define "__znver4__" : (undefined) 00:04:08.288 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:08.288 Message: lib/log: Defining dependency "log" 00:04:08.288 Message: lib/kvargs: Defining dependency "kvargs" 00:04:08.288 Message: lib/telemetry: Defining dependency "telemetry" 00:04:08.288 Checking for function "getentropy" : NO 00:04:08.288 Message: lib/eal: Defining dependency "eal" 00:04:08.288 Message: lib/ring: Defining dependency "ring" 00:04:08.288 Message: lib/rcu: Defining dependency "rcu" 00:04:08.288 Message: lib/mempool: Defining dependency "mempool" 00:04:08.288 Message: lib/mbuf: Defining dependency "mbuf" 00:04:08.288 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:08.288 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:08.288 Compiler for C supports arguments -mpclmul: YES 00:04:08.288 Compiler for C supports arguments -maes: YES 00:04:08.288 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:08.288 Compiler for C supports arguments -mavx512bw: YES 00:04:08.288 Compiler for C supports arguments -mavx512dq: YES 00:04:08.288 Compiler for C supports arguments -mavx512vl: YES 00:04:08.288 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:08.288 Compiler for C supports arguments -mavx2: YES 00:04:08.288 Compiler for C supports arguments -mavx: YES 00:04:08.288 Message: lib/net: Defining dependency "net" 00:04:08.288 Message: lib/meter: Defining dependency "meter" 00:04:08.288 Message: lib/ethdev: Defining dependency "ethdev" 00:04:08.288 Message: lib/pci: Defining dependency "pci" 00:04:08.288 Message: lib/cmdline: Defining dependency "cmdline" 00:04:08.288 Message: lib/hash: Defining dependency "hash" 00:04:08.288 Message: lib/timer: Defining dependency "timer" 00:04:08.288 Message: lib/compressdev: Defining dependency "compressdev" 00:04:08.288 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:08.288 Message: lib/dmadev: Defining dependency "dmadev" 00:04:08.288 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:08.288 Message: lib/power: Defining 
dependency "power" 00:04:08.288 Message: lib/reorder: Defining dependency "reorder" 00:04:08.288 Message: lib/security: Defining dependency "security" 00:04:08.288 Has header "linux/userfaultfd.h" : YES 00:04:08.288 Has header "linux/vduse.h" : YES 00:04:08.288 Message: lib/vhost: Defining dependency "vhost" 00:04:08.288 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:08.288 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:08.288 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:08.288 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:08.288 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:08.288 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:08.288 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:08.288 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:08.288 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:08.288 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:08.288 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:08.288 Configuring doxy-api-html.conf using configuration 00:04:08.288 Configuring doxy-api-man.conf using configuration 00:04:08.288 Program mandb found: YES (/usr/bin/mandb) 00:04:08.288 Program sphinx-build found: NO 00:04:08.288 Configuring rte_build_config.h using configuration 00:04:08.288 Message: 00:04:08.288 ================= 00:04:08.288 Applications Enabled 00:04:08.288 ================= 00:04:08.288 00:04:08.288 apps: 00:04:08.288 00:04:08.288 00:04:08.288 Message: 00:04:08.288 ================= 00:04:08.288 Libraries Enabled 00:04:08.288 ================= 00:04:08.288 00:04:08.288 libs: 00:04:08.288 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:08.288 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:08.288 cryptodev, dmadev, power, reorder, security, vhost, 00:04:08.288 00:04:08.288 Message: 00:04:08.288 =============== 00:04:08.288 Drivers Enabled 00:04:08.288 =============== 00:04:08.288 00:04:08.288 common: 00:04:08.288 00:04:08.288 bus: 00:04:08.288 pci, vdev, 00:04:08.288 mempool: 00:04:08.288 ring, 00:04:08.288 dma: 00:04:08.288 00:04:08.288 net: 00:04:08.288 00:04:08.288 crypto: 00:04:08.288 00:04:08.288 compress: 00:04:08.288 00:04:08.288 vdpa: 00:04:08.288 00:04:08.288 00:04:08.288 Message: 00:04:08.288 ================= 00:04:08.288 Content Skipped 00:04:08.288 ================= 00:04:08.288 00:04:08.288 apps: 00:04:08.288 dumpcap: explicitly disabled via build config 00:04:08.288 graph: explicitly disabled via build config 00:04:08.288 pdump: explicitly disabled via build config 00:04:08.288 proc-info: explicitly disabled via build config 00:04:08.288 test-acl: explicitly disabled via build config 00:04:08.288 test-bbdev: explicitly disabled via build config 00:04:08.288 test-cmdline: explicitly disabled via build config 00:04:08.288 test-compress-perf: explicitly disabled via build config 00:04:08.288 test-crypto-perf: explicitly disabled via build config 00:04:08.289 test-dma-perf: explicitly disabled via build config 00:04:08.289 test-eventdev: explicitly disabled via build config 00:04:08.289 test-fib: explicitly disabled via build config 00:04:08.289 test-flow-perf: explicitly disabled via build config 00:04:08.289 test-gpudev: explicitly disabled via build config 00:04:08.289 test-mldev: explicitly disabled via build config 00:04:08.289 test-pipeline: 
explicitly disabled via build config 00:04:08.289 test-pmd: explicitly disabled via build config 00:04:08.289 test-regex: explicitly disabled via build config 00:04:08.289 test-sad: explicitly disabled via build config 00:04:08.289 test-security-perf: explicitly disabled via build config 00:04:08.289 00:04:08.289 libs: 00:04:08.289 argparse: explicitly disabled via build config 00:04:08.289 metrics: explicitly disabled via build config 00:04:08.289 acl: explicitly disabled via build config 00:04:08.289 bbdev: explicitly disabled via build config 00:04:08.289 bitratestats: explicitly disabled via build config 00:04:08.289 bpf: explicitly disabled via build config 00:04:08.289 cfgfile: explicitly disabled via build config 00:04:08.289 distributor: explicitly disabled via build config 00:04:08.289 efd: explicitly disabled via build config 00:04:08.289 eventdev: explicitly disabled via build config 00:04:08.289 dispatcher: explicitly disabled via build config 00:04:08.289 gpudev: explicitly disabled via build config 00:04:08.289 gro: explicitly disabled via build config 00:04:08.289 gso: explicitly disabled via build config 00:04:08.289 ip_frag: explicitly disabled via build config 00:04:08.289 jobstats: explicitly disabled via build config 00:04:08.289 latencystats: explicitly disabled via build config 00:04:08.289 lpm: explicitly disabled via build config 00:04:08.289 member: explicitly disabled via build config 00:04:08.289 pcapng: explicitly disabled via build config 00:04:08.289 rawdev: explicitly disabled via build config 00:04:08.289 regexdev: explicitly disabled via build config 00:04:08.289 mldev: explicitly disabled via build config 00:04:08.289 rib: explicitly disabled via build config 00:04:08.289 sched: explicitly disabled via build config 00:04:08.289 stack: explicitly disabled via build config 00:04:08.289 ipsec: explicitly disabled via build config 00:04:08.289 pdcp: explicitly disabled via build config 00:04:08.289 fib: explicitly disabled via build config 00:04:08.289 port: explicitly disabled via build config 00:04:08.289 pdump: explicitly disabled via build config 00:04:08.289 table: explicitly disabled via build config 00:04:08.289 pipeline: explicitly disabled via build config 00:04:08.289 graph: explicitly disabled via build config 00:04:08.289 node: explicitly disabled via build config 00:04:08.289 00:04:08.289 drivers: 00:04:08.289 common/cpt: not in enabled drivers build config 00:04:08.289 common/dpaax: not in enabled drivers build config 00:04:08.289 common/iavf: not in enabled drivers build config 00:04:08.289 common/idpf: not in enabled drivers build config 00:04:08.289 common/ionic: not in enabled drivers build config 00:04:08.289 common/mvep: not in enabled drivers build config 00:04:08.289 common/octeontx: not in enabled drivers build config 00:04:08.289 bus/auxiliary: not in enabled drivers build config 00:04:08.289 bus/cdx: not in enabled drivers build config 00:04:08.289 bus/dpaa: not in enabled drivers build config 00:04:08.289 bus/fslmc: not in enabled drivers build config 00:04:08.289 bus/ifpga: not in enabled drivers build config 00:04:08.289 bus/platform: not in enabled drivers build config 00:04:08.289 bus/uacce: not in enabled drivers build config 00:04:08.289 bus/vmbus: not in enabled drivers build config 00:04:08.289 common/cnxk: not in enabled drivers build config 00:04:08.289 common/mlx5: not in enabled drivers build config 00:04:08.289 common/nfp: not in enabled drivers build config 00:04:08.289 common/nitrox: not in enabled drivers build config 
00:04:08.289 common/qat: not in enabled drivers build config 00:04:08.289 common/sfc_efx: not in enabled drivers build config 00:04:08.289 mempool/bucket: not in enabled drivers build config 00:04:08.289 mempool/cnxk: not in enabled drivers build config 00:04:08.289 mempool/dpaa: not in enabled drivers build config 00:04:08.289 mempool/dpaa2: not in enabled drivers build config 00:04:08.289 mempool/octeontx: not in enabled drivers build config 00:04:08.289 mempool/stack: not in enabled drivers build config 00:04:08.289 dma/cnxk: not in enabled drivers build config 00:04:08.289 dma/dpaa: not in enabled drivers build config 00:04:08.289 dma/dpaa2: not in enabled drivers build config 00:04:08.289 dma/hisilicon: not in enabled drivers build config 00:04:08.289 dma/idxd: not in enabled drivers build config 00:04:08.289 dma/ioat: not in enabled drivers build config 00:04:08.289 dma/skeleton: not in enabled drivers build config 00:04:08.289 net/af_packet: not in enabled drivers build config 00:04:08.289 net/af_xdp: not in enabled drivers build config 00:04:08.289 net/ark: not in enabled drivers build config 00:04:08.289 net/atlantic: not in enabled drivers build config 00:04:08.289 net/avp: not in enabled drivers build config 00:04:08.289 net/axgbe: not in enabled drivers build config 00:04:08.289 net/bnx2x: not in enabled drivers build config 00:04:08.289 net/bnxt: not in enabled drivers build config 00:04:08.289 net/bonding: not in enabled drivers build config 00:04:08.289 net/cnxk: not in enabled drivers build config 00:04:08.289 net/cpfl: not in enabled drivers build config 00:04:08.289 net/cxgbe: not in enabled drivers build config 00:04:08.289 net/dpaa: not in enabled drivers build config 00:04:08.289 net/dpaa2: not in enabled drivers build config 00:04:08.289 net/e1000: not in enabled drivers build config 00:04:08.289 net/ena: not in enabled drivers build config 00:04:08.289 net/enetc: not in enabled drivers build config 00:04:08.289 net/enetfec: not in enabled drivers build config 00:04:08.289 net/enic: not in enabled drivers build config 00:04:08.289 net/failsafe: not in enabled drivers build config 00:04:08.289 net/fm10k: not in enabled drivers build config 00:04:08.289 net/gve: not in enabled drivers build config 00:04:08.289 net/hinic: not in enabled drivers build config 00:04:08.289 net/hns3: not in enabled drivers build config 00:04:08.289 net/i40e: not in enabled drivers build config 00:04:08.289 net/iavf: not in enabled drivers build config 00:04:08.289 net/ice: not in enabled drivers build config 00:04:08.289 net/idpf: not in enabled drivers build config 00:04:08.289 net/igc: not in enabled drivers build config 00:04:08.289 net/ionic: not in enabled drivers build config 00:04:08.289 net/ipn3ke: not in enabled drivers build config 00:04:08.289 net/ixgbe: not in enabled drivers build config 00:04:08.289 net/mana: not in enabled drivers build config 00:04:08.289 net/memif: not in enabled drivers build config 00:04:08.289 net/mlx4: not in enabled drivers build config 00:04:08.289 net/mlx5: not in enabled drivers build config 00:04:08.289 net/mvneta: not in enabled drivers build config 00:04:08.289 net/mvpp2: not in enabled drivers build config 00:04:08.289 net/netvsc: not in enabled drivers build config 00:04:08.289 net/nfb: not in enabled drivers build config 00:04:08.289 net/nfp: not in enabled drivers build config 00:04:08.289 net/ngbe: not in enabled drivers build config 00:04:08.289 net/null: not in enabled drivers build config 00:04:08.289 net/octeontx: not in enabled drivers 
build config 00:04:08.289 net/octeon_ep: not in enabled drivers build config 00:04:08.289 net/pcap: not in enabled drivers build config 00:04:08.289 net/pfe: not in enabled drivers build config 00:04:08.289 net/qede: not in enabled drivers build config 00:04:08.289 net/ring: not in enabled drivers build config 00:04:08.289 net/sfc: not in enabled drivers build config 00:04:08.289 net/softnic: not in enabled drivers build config 00:04:08.289 net/tap: not in enabled drivers build config 00:04:08.289 net/thunderx: not in enabled drivers build config 00:04:08.289 net/txgbe: not in enabled drivers build config 00:04:08.289 net/vdev_netvsc: not in enabled drivers build config 00:04:08.289 net/vhost: not in enabled drivers build config 00:04:08.289 net/virtio: not in enabled drivers build config 00:04:08.289 net/vmxnet3: not in enabled drivers build config 00:04:08.289 raw/*: missing internal dependency, "rawdev" 00:04:08.289 crypto/armv8: not in enabled drivers build config 00:04:08.289 crypto/bcmfs: not in enabled drivers build config 00:04:08.289 crypto/caam_jr: not in enabled drivers build config 00:04:08.289 crypto/ccp: not in enabled drivers build config 00:04:08.289 crypto/cnxk: not in enabled drivers build config 00:04:08.289 crypto/dpaa_sec: not in enabled drivers build config 00:04:08.289 crypto/dpaa2_sec: not in enabled drivers build config 00:04:08.289 crypto/ipsec_mb: not in enabled drivers build config 00:04:08.289 crypto/mlx5: not in enabled drivers build config 00:04:08.289 crypto/mvsam: not in enabled drivers build config 00:04:08.289 crypto/nitrox: not in enabled drivers build config 00:04:08.289 crypto/null: not in enabled drivers build config 00:04:08.289 crypto/octeontx: not in enabled drivers build config 00:04:08.289 crypto/openssl: not in enabled drivers build config 00:04:08.289 crypto/scheduler: not in enabled drivers build config 00:04:08.289 crypto/uadk: not in enabled drivers build config 00:04:08.289 crypto/virtio: not in enabled drivers build config 00:04:08.289 compress/isal: not in enabled drivers build config 00:04:08.289 compress/mlx5: not in enabled drivers build config 00:04:08.289 compress/nitrox: not in enabled drivers build config 00:04:08.289 compress/octeontx: not in enabled drivers build config 00:04:08.289 compress/zlib: not in enabled drivers build config 00:04:08.289 regex/*: missing internal dependency, "regexdev" 00:04:08.289 ml/*: missing internal dependency, "mldev" 00:04:08.289 vdpa/ifc: not in enabled drivers build config 00:04:08.289 vdpa/mlx5: not in enabled drivers build config 00:04:08.289 vdpa/nfp: not in enabled drivers build config 00:04:08.289 vdpa/sfc: not in enabled drivers build config 00:04:08.289 event/*: missing internal dependency, "eventdev" 00:04:08.289 baseband/*: missing internal dependency, "bbdev" 00:04:08.289 gpu/*: missing internal dependency, "gpudev" 00:04:08.289 00:04:08.289 00:04:08.289 Build targets in project: 85 00:04:08.289 00:04:08.289 DPDK 24.03.0 00:04:08.289 00:04:08.289 User defined options 00:04:08.289 buildtype : debug 00:04:08.289 default_library : shared 00:04:08.289 libdir : lib 00:04:08.289 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:08.290 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:08.290 c_link_args : 00:04:08.290 cpu_instruction_set: native 00:04:08.290 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:08.290 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:08.290 enable_docs : false 00:04:08.290 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:08.290 enable_kmods : false 00:04:08.290 max_lcores : 128 00:04:08.290 tests : false 00:04:08.290 00:04:08.290 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:08.290 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:08.290 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:08.290 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:08.290 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:08.290 [4/268] Linking static target lib/librte_log.a 00:04:08.290 [5/268] Linking static target lib/librte_kvargs.a 00:04:08.290 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:08.858 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.858 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:09.117 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:09.117 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:09.117 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:09.117 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:09.117 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:09.117 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:09.117 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:09.117 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:09.117 [17/268] Linking static target lib/librte_telemetry.a 00:04:09.375 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:09.375 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.375 [20/268] Linking target lib/librte_log.so.24.1 00:04:09.634 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:09.634 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:09.634 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:09.893 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:09.893 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:09.893 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:09.893 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:10.153 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:10.153 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:10.153 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.153 
[31/268] Linking target lib/librte_telemetry.so.24.1 00:04:10.153 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:10.153 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:10.153 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:10.413 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:10.413 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:10.413 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:10.673 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:10.673 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:10.932 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:10.932 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:10.932 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:10.932 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:10.932 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:11.210 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:11.210 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:11.210 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:11.210 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:11.489 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:11.489 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:11.489 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:11.759 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:11.759 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:12.019 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:12.019 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:12.019 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:12.019 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:12.019 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:12.278 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:12.278 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:12.573 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:12.573 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:12.573 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:12.573 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:12.832 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:12.832 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:12.832 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:12.832 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:13.092 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:13.092 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:13.092 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:13.351 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:13.351 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:13.351 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:13.351 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:13.351 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:13.351 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:13.610 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:13.610 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:13.610 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:13.610 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:13.870 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:13.870 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:13.870 [84/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:13.870 [85/268] Linking static target lib/librte_rcu.a 00:04:13.870 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:13.870 [87/268] Linking static target lib/librte_ring.a 00:04:14.130 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:14.130 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:14.130 [90/268] Linking static target lib/librte_eal.a 00:04:14.130 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:14.389 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:14.389 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:14.389 [94/268] Linking static target lib/librte_mempool.a 00:04:14.389 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.649 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:14.649 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.649 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:14.909 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:14.909 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:14.909 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:15.169 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:15.169 [103/268] Linking static target lib/librte_mbuf.a 00:04:15.169 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:15.169 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:15.169 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:15.428 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:15.687 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:15.687 [109/268] Linking static target lib/librte_net.a 00:04:15.687 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:15.687 [111/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.687 [112/268] Linking static target lib/librte_meter.a 00:04:15.946 
[113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:15.946 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:15.946 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:16.205 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.205 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.205 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.205 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:16.464 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:16.723 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:16.982 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:16.982 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:16.982 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:16.982 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:17.241 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:17.241 [127/268] Linking static target lib/librte_pci.a 00:04:17.241 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:17.241 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:17.241 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:17.241 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:17.500 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:17.500 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:17.500 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.500 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:17.500 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:17.500 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:17.759 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:17.759 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:17.759 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:17.759 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:17.759 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:17.759 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:17.759 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:18.018 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:18.018 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:18.018 [147/268] Linking static target lib/librte_cmdline.a 00:04:18.018 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:18.018 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:18.277 [150/268] Linking static target lib/librte_ethdev.a 00:04:18.277 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:18.536 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:18.536 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:18.536 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:18.536 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:18.536 [156/268] Linking static target lib/librte_timer.a 00:04:18.795 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:18.795 [158/268] Linking static target lib/librte_hash.a 00:04:19.056 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:19.056 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:19.056 [161/268] Linking static target lib/librte_compressdev.a 00:04:19.056 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:19.316 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:19.316 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.316 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:19.575 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:19.575 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.575 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:19.834 [169/268] Linking static target lib/librte_dmadev.a 00:04:19.835 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:19.835 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:19.835 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.835 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:20.094 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:20.094 [175/268] Linking static target lib/librte_cryptodev.a 00:04:20.094 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:20.094 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.352 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:20.611 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:20.611 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:20.611 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:20.611 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:20.611 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.611 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:20.871 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:20.871 [186/268] Linking static target lib/librte_power.a 00:04:21.131 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:21.131 [188/268] Linking static target lib/librte_reorder.a 00:04:21.390 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:21.390 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:21.390 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:21.390 [192/268] Linking static target 
lib/librte_security.a 00:04:21.390 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:21.702 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:21.702 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.961 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.220 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.220 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:22.479 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:22.479 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:22.479 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.479 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:22.738 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:22.738 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:22.996 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:23.254 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:23.254 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:23.254 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:23.254 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:23.254 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:23.254 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:23.513 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:23.513 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:23.513 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:23.513 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:23.513 [216/268] Linking static target drivers/librte_bus_vdev.a 00:04:23.513 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:23.513 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:23.513 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:23.513 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:23.513 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:23.772 [222/268] Linking static target drivers/librte_bus_pci.a 00:04:23.772 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:23.772 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:23.772 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:23.772 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:23.772 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.031 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.599 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:24.599 [230/268] Linking static target lib/librte_vhost.a 00:04:25.536 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.536 [232/268] Linking target lib/librte_eal.so.24.1 00:04:25.795 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:25.795 [234/268] Linking target lib/librte_dmadev.so.24.1 00:04:25.795 [235/268] Linking target lib/librte_timer.so.24.1 00:04:25.795 [236/268] Linking target lib/librte_ring.so.24.1 00:04:25.795 [237/268] Linking target lib/librte_meter.so.24.1 00:04:25.795 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:25.795 [239/268] Linking target lib/librte_pci.so.24.1 00:04:26.054 [240/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.054 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:26.054 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:26.054 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:26.054 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:26.054 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:26.054 [246/268] Linking target lib/librte_rcu.so.24.1 00:04:26.054 [247/268] Linking target lib/librte_mempool.so.24.1 00:04:26.054 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:26.054 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:26.054 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:26.054 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:26.054 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:26.313 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.313 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:26.313 [255/268] Linking target lib/librte_compressdev.so.24.1 00:04:26.313 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:04:26.313 [257/268] Linking target lib/librte_net.so.24.1 00:04:26.313 [258/268] Linking target lib/librte_reorder.so.24.1 00:04:26.572 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:26.572 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:26.572 [261/268] Linking target lib/librte_security.so.24.1 00:04:26.572 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:26.572 [263/268] Linking target lib/librte_hash.so.24.1 00:04:26.572 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:26.831 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:26.831 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:26.831 [267/268] Linking target lib/librte_power.so.24.1 00:04:26.831 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:26.831 INFO: autodetecting backend as ninja 00:04:26.831 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:53.386 CC lib/ut/ut.o 00:04:53.386 CC lib/ut_mock/mock.o 00:04:53.386 CC lib/log/log.o 00:04:53.386 CC lib/log/log_flags.o 00:04:53.386 CC lib/log/log_deprecated.o 00:04:53.386 LIB 
libspdk_ut_mock.a 00:04:53.386 LIB libspdk_ut.a 00:04:53.386 SO libspdk_ut_mock.so.6.0 00:04:53.386 LIB libspdk_log.a 00:04:53.386 SO libspdk_ut.so.2.0 00:04:53.386 SO libspdk_log.so.7.1 00:04:53.386 SYMLINK libspdk_ut_mock.so 00:04:53.386 SYMLINK libspdk_ut.so 00:04:53.386 SYMLINK libspdk_log.so 00:04:53.386 CXX lib/trace_parser/trace.o 00:04:53.386 CC lib/util/base64.o 00:04:53.386 CC lib/util/cpuset.o 00:04:53.386 CC lib/util/bit_array.o 00:04:53.386 CC lib/util/crc16.o 00:04:53.386 CC lib/util/crc32c.o 00:04:53.386 CC lib/util/crc32.o 00:04:53.386 CC lib/ioat/ioat.o 00:04:53.386 CC lib/dma/dma.o 00:04:53.386 CC lib/vfio_user/host/vfio_user_pci.o 00:04:53.386 CC lib/util/crc32_ieee.o 00:04:53.386 CC lib/util/crc64.o 00:04:53.386 CC lib/util/dif.o 00:04:53.386 CC lib/util/fd.o 00:04:53.386 CC lib/vfio_user/host/vfio_user.o 00:04:53.386 LIB libspdk_dma.a 00:04:53.386 SO libspdk_dma.so.5.0 00:04:53.386 CC lib/util/fd_group.o 00:04:53.386 CC lib/util/file.o 00:04:53.386 LIB libspdk_ioat.a 00:04:53.386 CC lib/util/hexlify.o 00:04:53.386 SO libspdk_ioat.so.7.0 00:04:53.386 SYMLINK libspdk_dma.so 00:04:53.386 CC lib/util/iov.o 00:04:53.386 CC lib/util/math.o 00:04:53.386 SYMLINK libspdk_ioat.so 00:04:53.386 CC lib/util/net.o 00:04:53.386 CC lib/util/pipe.o 00:04:53.386 LIB libspdk_vfio_user.a 00:04:53.386 SO libspdk_vfio_user.so.5.0 00:04:53.386 CC lib/util/strerror_tls.o 00:04:53.386 CC lib/util/string.o 00:04:53.386 SYMLINK libspdk_vfio_user.so 00:04:53.386 CC lib/util/uuid.o 00:04:53.386 CC lib/util/xor.o 00:04:53.386 CC lib/util/zipf.o 00:04:53.386 CC lib/util/md5.o 00:04:53.386 LIB libspdk_util.a 00:04:53.386 SO libspdk_util.so.10.1 00:04:53.386 LIB libspdk_trace_parser.a 00:04:53.386 SYMLINK libspdk_util.so 00:04:53.386 SO libspdk_trace_parser.so.6.0 00:04:53.386 SYMLINK libspdk_trace_parser.so 00:04:53.386 CC lib/conf/conf.o 00:04:53.386 CC lib/rdma_utils/rdma_utils.o 00:04:53.386 CC lib/json/json_parse.o 00:04:53.386 CC lib/json/json_util.o 00:04:53.386 CC lib/json/json_write.o 00:04:53.386 CC lib/env_dpdk/env.o 00:04:53.386 CC lib/vmd/vmd.o 00:04:53.386 CC lib/vmd/led.o 00:04:53.386 CC lib/env_dpdk/memory.o 00:04:53.386 CC lib/idxd/idxd.o 00:04:53.645 CC lib/idxd/idxd_user.o 00:04:53.645 LIB libspdk_conf.a 00:04:53.645 CC lib/env_dpdk/pci.o 00:04:53.645 CC lib/idxd/idxd_kernel.o 00:04:53.645 SO libspdk_conf.so.6.0 00:04:53.645 LIB libspdk_rdma_utils.a 00:04:53.645 LIB libspdk_json.a 00:04:53.645 SYMLINK libspdk_conf.so 00:04:53.645 SO libspdk_rdma_utils.so.1.0 00:04:53.645 CC lib/env_dpdk/init.o 00:04:53.645 SO libspdk_json.so.6.0 00:04:53.645 SYMLINK libspdk_rdma_utils.so 00:04:53.904 CC lib/env_dpdk/threads.o 00:04:53.904 SYMLINK libspdk_json.so 00:04:53.904 CC lib/env_dpdk/pci_ioat.o 00:04:53.904 CC lib/env_dpdk/pci_virtio.o 00:04:53.904 CC lib/env_dpdk/pci_vmd.o 00:04:53.904 CC lib/env_dpdk/pci_idxd.o 00:04:53.904 CC lib/env_dpdk/pci_event.o 00:04:53.904 CC lib/env_dpdk/sigbus_handler.o 00:04:53.904 CC lib/env_dpdk/pci_dpdk.o 00:04:53.904 LIB libspdk_idxd.a 00:04:54.164 SO libspdk_idxd.so.12.1 00:04:54.164 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:54.164 LIB libspdk_vmd.a 00:04:54.164 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:54.164 SO libspdk_vmd.so.6.0 00:04:54.164 SYMLINK libspdk_idxd.so 00:04:54.164 SYMLINK libspdk_vmd.so 00:04:54.423 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:54.423 CC lib/jsonrpc/jsonrpc_server.o 00:04:54.423 CC lib/jsonrpc/jsonrpc_client.o 00:04:54.423 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:54.423 CC lib/rdma_provider/common.o 00:04:54.423 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:04:54.682 LIB libspdk_rdma_provider.a 00:04:54.682 LIB libspdk_jsonrpc.a 00:04:54.682 SO libspdk_rdma_provider.so.7.0 00:04:54.682 SO libspdk_jsonrpc.so.6.0 00:04:54.682 SYMLINK libspdk_rdma_provider.so 00:04:54.682 SYMLINK libspdk_jsonrpc.so 00:04:54.942 LIB libspdk_env_dpdk.a 00:04:54.942 SO libspdk_env_dpdk.so.15.1 00:04:54.942 CC lib/rpc/rpc.o 00:04:55.201 SYMLINK libspdk_env_dpdk.so 00:04:55.201 LIB libspdk_rpc.a 00:04:55.201 SO libspdk_rpc.so.6.0 00:04:55.462 SYMLINK libspdk_rpc.so 00:04:55.462 CC lib/notify/notify_rpc.o 00:04:55.462 CC lib/notify/notify.o 00:04:55.462 CC lib/keyring/keyring_rpc.o 00:04:55.462 CC lib/trace/trace.o 00:04:55.462 CC lib/keyring/keyring.o 00:04:55.462 CC lib/trace/trace_rpc.o 00:04:55.462 CC lib/trace/trace_flags.o 00:04:55.722 LIB libspdk_notify.a 00:04:55.722 SO libspdk_notify.so.6.0 00:04:55.722 LIB libspdk_trace.a 00:04:55.980 LIB libspdk_keyring.a 00:04:55.980 SYMLINK libspdk_notify.so 00:04:55.980 SO libspdk_trace.so.11.0 00:04:55.980 SO libspdk_keyring.so.2.0 00:04:55.980 SYMLINK libspdk_keyring.so 00:04:55.980 SYMLINK libspdk_trace.so 00:04:56.238 CC lib/thread/iobuf.o 00:04:56.238 CC lib/thread/thread.o 00:04:56.238 CC lib/sock/sock.o 00:04:56.238 CC lib/sock/sock_rpc.o 00:04:56.806 LIB libspdk_sock.a 00:04:56.806 SO libspdk_sock.so.10.0 00:04:56.806 SYMLINK libspdk_sock.so 00:04:57.064 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:57.064 CC lib/nvme/nvme_fabric.o 00:04:57.064 CC lib/nvme/nvme_ctrlr.o 00:04:57.064 CC lib/nvme/nvme_ns_cmd.o 00:04:57.064 CC lib/nvme/nvme_ns.o 00:04:57.064 CC lib/nvme/nvme_pcie.o 00:04:57.064 CC lib/nvme/nvme_pcie_common.o 00:04:57.064 CC lib/nvme/nvme_qpair.o 00:04:57.064 CC lib/nvme/nvme.o 00:04:57.999 LIB libspdk_thread.a 00:04:58.000 CC lib/nvme/nvme_quirks.o 00:04:58.000 CC lib/nvme/nvme_transport.o 00:04:58.000 SO libspdk_thread.so.11.0 00:04:58.000 CC lib/nvme/nvme_discovery.o 00:04:58.000 SYMLINK libspdk_thread.so 00:04:58.000 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:58.000 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:58.000 CC lib/nvme/nvme_tcp.o 00:04:58.000 CC lib/nvme/nvme_opal.o 00:04:58.258 CC lib/nvme/nvme_io_msg.o 00:04:58.258 CC lib/nvme/nvme_poll_group.o 00:04:58.516 CC lib/nvme/nvme_zns.o 00:04:58.516 CC lib/nvme/nvme_stubs.o 00:04:58.516 CC lib/nvme/nvme_auth.o 00:04:58.786 CC lib/nvme/nvme_cuse.o 00:04:58.786 CC lib/accel/accel.o 00:04:58.786 CC lib/accel/accel_rpc.o 00:04:59.058 CC lib/blob/blobstore.o 00:04:59.058 CC lib/accel/accel_sw.o 00:04:59.316 CC lib/init/json_config.o 00:04:59.316 CC lib/nvme/nvme_rdma.o 00:04:59.316 CC lib/init/subsystem.o 00:04:59.575 CC lib/init/subsystem_rpc.o 00:04:59.575 CC lib/virtio/virtio.o 00:04:59.575 CC lib/fsdev/fsdev.o 00:04:59.575 CC lib/fsdev/fsdev_io.o 00:04:59.575 CC lib/fsdev/fsdev_rpc.o 00:04:59.575 CC lib/virtio/virtio_vhost_user.o 00:04:59.575 CC lib/virtio/virtio_vfio_user.o 00:04:59.575 CC lib/init/rpc.o 00:04:59.835 CC lib/virtio/virtio_pci.o 00:04:59.835 CC lib/blob/request.o 00:04:59.835 CC lib/blob/zeroes.o 00:04:59.835 LIB libspdk_init.a 00:04:59.835 LIB libspdk_accel.a 00:04:59.835 SO libspdk_init.so.6.0 00:04:59.835 CC lib/blob/blob_bs_dev.o 00:04:59.835 SO libspdk_accel.so.16.0 00:05:00.094 SYMLINK libspdk_init.so 00:05:00.094 SYMLINK libspdk_accel.so 00:05:00.094 LIB libspdk_virtio.a 00:05:00.094 SO libspdk_virtio.so.7.0 00:05:00.094 SYMLINK libspdk_virtio.so 00:05:00.094 CC lib/event/app.o 00:05:00.094 CC lib/event/reactor.o 00:05:00.094 CC lib/event/app_rpc.o 00:05:00.094 CC lib/event/scheduler_static.o 
00:05:00.094 CC lib/event/log_rpc.o 00:05:00.094 LIB libspdk_fsdev.a 00:05:00.094 CC lib/bdev/bdev.o 00:05:00.094 CC lib/bdev/bdev_rpc.o 00:05:00.352 SO libspdk_fsdev.so.2.0 00:05:00.352 SYMLINK libspdk_fsdev.so 00:05:00.352 CC lib/bdev/bdev_zone.o 00:05:00.352 CC lib/bdev/part.o 00:05:00.352 CC lib/bdev/scsi_nvme.o 00:05:00.352 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:00.611 LIB libspdk_event.a 00:05:00.611 SO libspdk_event.so.14.0 00:05:00.611 LIB libspdk_nvme.a 00:05:00.611 SYMLINK libspdk_event.so 00:05:00.870 SO libspdk_nvme.so.15.0 00:05:01.129 LIB libspdk_fuse_dispatcher.a 00:05:01.129 SO libspdk_fuse_dispatcher.so.1.0 00:05:01.129 SYMLINK libspdk_nvme.so 00:05:01.129 SYMLINK libspdk_fuse_dispatcher.so 00:05:02.067 LIB libspdk_blob.a 00:05:02.067 SO libspdk_blob.so.11.0 00:05:02.067 SYMLINK libspdk_blob.so 00:05:02.327 CC lib/blobfs/blobfs.o 00:05:02.327 CC lib/blobfs/tree.o 00:05:02.327 CC lib/lvol/lvol.o 00:05:02.895 LIB libspdk_bdev.a 00:05:02.895 SO libspdk_bdev.so.17.0 00:05:02.895 SYMLINK libspdk_bdev.so 00:05:03.155 LIB libspdk_blobfs.a 00:05:03.155 CC lib/ublk/ublk.o 00:05:03.155 CC lib/nvmf/ctrlr_discovery.o 00:05:03.155 CC lib/nvmf/ctrlr_bdev.o 00:05:03.155 CC lib/nvmf/ctrlr.o 00:05:03.155 CC lib/nbd/nbd.o 00:05:03.155 CC lib/nvmf/subsystem.o 00:05:03.155 CC lib/scsi/dev.o 00:05:03.155 SO libspdk_blobfs.so.10.0 00:05:03.155 CC lib/ftl/ftl_core.o 00:05:03.155 SYMLINK libspdk_blobfs.so 00:05:03.155 CC lib/ftl/ftl_init.o 00:05:03.414 LIB libspdk_lvol.a 00:05:03.415 SO libspdk_lvol.so.10.0 00:05:03.415 CC lib/scsi/lun.o 00:05:03.415 SYMLINK libspdk_lvol.so 00:05:03.415 CC lib/nbd/nbd_rpc.o 00:05:03.415 CC lib/ftl/ftl_layout.o 00:05:03.415 CC lib/ftl/ftl_debug.o 00:05:03.415 CC lib/ftl/ftl_io.o 00:05:03.673 CC lib/ftl/ftl_sb.o 00:05:03.673 LIB libspdk_nbd.a 00:05:03.673 SO libspdk_nbd.so.7.0 00:05:03.673 CC lib/scsi/port.o 00:05:03.673 SYMLINK libspdk_nbd.so 00:05:03.673 CC lib/scsi/scsi.o 00:05:03.673 CC lib/ublk/ublk_rpc.o 00:05:03.673 CC lib/nvmf/nvmf.o 00:05:03.673 CC lib/nvmf/nvmf_rpc.o 00:05:03.673 CC lib/ftl/ftl_l2p.o 00:05:03.673 CC lib/ftl/ftl_l2p_flat.o 00:05:03.673 CC lib/nvmf/transport.o 00:05:03.932 CC lib/nvmf/tcp.o 00:05:03.932 CC lib/scsi/scsi_bdev.o 00:05:03.932 LIB libspdk_ublk.a 00:05:03.932 SO libspdk_ublk.so.3.0 00:05:03.932 CC lib/nvmf/stubs.o 00:05:03.932 SYMLINK libspdk_ublk.so 00:05:03.932 CC lib/scsi/scsi_pr.o 00:05:03.932 CC lib/ftl/ftl_nv_cache.o 00:05:04.500 CC lib/ftl/ftl_band.o 00:05:04.500 CC lib/scsi/scsi_rpc.o 00:05:04.500 CC lib/nvmf/mdns_server.o 00:05:04.500 CC lib/nvmf/rdma.o 00:05:04.500 CC lib/scsi/task.o 00:05:04.500 CC lib/nvmf/auth.o 00:05:04.500 CC lib/ftl/ftl_band_ops.o 00:05:04.500 CC lib/ftl/ftl_writer.o 00:05:04.759 LIB libspdk_scsi.a 00:05:04.759 CC lib/ftl/ftl_rq.o 00:05:04.759 SO libspdk_scsi.so.9.0 00:05:04.759 CC lib/ftl/ftl_reloc.o 00:05:04.759 CC lib/ftl/ftl_l2p_cache.o 00:05:05.017 CC lib/ftl/ftl_p2l.o 00:05:05.017 SYMLINK libspdk_scsi.so 00:05:05.017 CC lib/ftl/ftl_p2l_log.o 00:05:05.017 CC lib/ftl/mngt/ftl_mngt.o 00:05:05.017 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:05.017 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:05.277 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:05.277 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:05.277 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:05.277 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:05.277 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:05.277 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:05.277 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:05.538 CC lib/iscsi/conn.o 00:05:05.538 CC lib/iscsi/init_grp.o 00:05:05.538 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:05:05.538 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:05.538 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:05.538 CC lib/ftl/utils/ftl_conf.o 00:05:05.538 CC lib/vhost/vhost.o 00:05:05.538 CC lib/ftl/utils/ftl_md.o 00:05:05.538 CC lib/iscsi/iscsi.o 00:05:05.797 CC lib/iscsi/param.o 00:05:05.797 CC lib/iscsi/portal_grp.o 00:05:05.797 CC lib/ftl/utils/ftl_mempool.o 00:05:05.797 CC lib/vhost/vhost_rpc.o 00:05:06.056 CC lib/ftl/utils/ftl_bitmap.o 00:05:06.056 CC lib/ftl/utils/ftl_property.o 00:05:06.056 CC lib/vhost/vhost_scsi.o 00:05:06.056 CC lib/vhost/vhost_blk.o 00:05:06.056 CC lib/iscsi/tgt_node.o 00:05:06.056 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:06.056 CC lib/vhost/rte_vhost_user.o 00:05:06.314 CC lib/iscsi/iscsi_subsystem.o 00:05:06.314 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:06.572 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:06.572 LIB libspdk_nvmf.a 00:05:06.572 CC lib/iscsi/iscsi_rpc.o 00:05:06.572 SO libspdk_nvmf.so.20.0 00:05:06.572 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:06.572 CC lib/iscsi/task.o 00:05:06.572 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:06.830 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:06.830 SYMLINK libspdk_nvmf.so 00:05:06.830 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:06.830 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:06.830 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:06.830 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:06.830 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:06.830 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:07.089 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:07.089 LIB libspdk_iscsi.a 00:05:07.089 CC lib/ftl/base/ftl_base_dev.o 00:05:07.089 SO libspdk_iscsi.so.8.0 00:05:07.089 CC lib/ftl/base/ftl_base_bdev.o 00:05:07.089 CC lib/ftl/ftl_trace.o 00:05:07.347 SYMLINK libspdk_iscsi.so 00:05:07.347 LIB libspdk_vhost.a 00:05:07.347 SO libspdk_vhost.so.8.0 00:05:07.347 LIB libspdk_ftl.a 00:05:07.347 SYMLINK libspdk_vhost.so 00:05:07.605 SO libspdk_ftl.so.9.0 00:05:07.864 SYMLINK libspdk_ftl.so 00:05:08.436 CC module/env_dpdk/env_dpdk_rpc.o 00:05:08.436 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:08.436 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:08.436 CC module/accel/error/accel_error.o 00:05:08.436 CC module/accel/ioat/accel_ioat.o 00:05:08.436 CC module/sock/posix/posix.o 00:05:08.436 CC module/accel/dsa/accel_dsa.o 00:05:08.436 CC module/fsdev/aio/fsdev_aio.o 00:05:08.436 CC module/keyring/file/keyring.o 00:05:08.436 CC module/blob/bdev/blob_bdev.o 00:05:08.436 LIB libspdk_env_dpdk_rpc.a 00:05:08.436 SO libspdk_env_dpdk_rpc.so.6.0 00:05:08.436 SYMLINK libspdk_env_dpdk_rpc.so 00:05:08.436 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:08.436 LIB libspdk_scheduler_dpdk_governor.a 00:05:08.701 CC module/keyring/file/keyring_rpc.o 00:05:08.701 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:08.701 LIB libspdk_scheduler_dynamic.a 00:05:08.701 CC module/accel/error/accel_error_rpc.o 00:05:08.701 CC module/accel/ioat/accel_ioat_rpc.o 00:05:08.701 SO libspdk_scheduler_dynamic.so.4.0 00:05:08.701 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:08.701 CC module/accel/dsa/accel_dsa_rpc.o 00:05:08.701 SYMLINK libspdk_scheduler_dynamic.so 00:05:08.701 LIB libspdk_blob_bdev.a 00:05:08.701 CC module/fsdev/aio/linux_aio_mgr.o 00:05:08.701 LIB libspdk_keyring_file.a 00:05:08.701 SO libspdk_blob_bdev.so.11.0 00:05:08.701 LIB libspdk_accel_error.a 00:05:08.701 SO libspdk_keyring_file.so.2.0 00:05:08.701 LIB libspdk_accel_ioat.a 00:05:08.701 SO libspdk_accel_error.so.2.0 00:05:08.701 SO libspdk_accel_ioat.so.6.0 00:05:08.701 SYMLINK 
libspdk_blob_bdev.so 00:05:08.960 LIB libspdk_accel_dsa.a 00:05:08.960 CC module/keyring/linux/keyring.o 00:05:08.960 SYMLINK libspdk_keyring_file.so 00:05:08.960 SO libspdk_accel_dsa.so.5.0 00:05:08.960 SYMLINK libspdk_accel_ioat.so 00:05:08.960 CC module/keyring/linux/keyring_rpc.o 00:05:08.960 SYMLINK libspdk_accel_error.so 00:05:08.960 CC module/scheduler/gscheduler/gscheduler.o 00:05:08.960 SYMLINK libspdk_accel_dsa.so 00:05:08.960 CC module/accel/iaa/accel_iaa.o 00:05:08.960 LIB libspdk_keyring_linux.a 00:05:08.960 SO libspdk_keyring_linux.so.1.0 00:05:08.960 CC module/sock/uring/uring.o 00:05:08.960 LIB libspdk_fsdev_aio.a 00:05:09.220 LIB libspdk_scheduler_gscheduler.a 00:05:09.220 SO libspdk_fsdev_aio.so.1.0 00:05:09.220 SO libspdk_scheduler_gscheduler.so.4.0 00:05:09.220 SYMLINK libspdk_keyring_linux.so 00:05:09.220 LIB libspdk_sock_posix.a 00:05:09.220 CC module/bdev/delay/vbdev_delay.o 00:05:09.220 CC module/bdev/error/vbdev_error.o 00:05:09.220 SYMLINK libspdk_scheduler_gscheduler.so 00:05:09.220 CC module/accel/iaa/accel_iaa_rpc.o 00:05:09.220 CC module/bdev/error/vbdev_error_rpc.o 00:05:09.220 CC module/bdev/gpt/gpt.o 00:05:09.220 SYMLINK libspdk_fsdev_aio.so 00:05:09.220 SO libspdk_sock_posix.so.6.0 00:05:09.220 CC module/bdev/gpt/vbdev_gpt.o 00:05:09.220 CC module/blobfs/bdev/blobfs_bdev.o 00:05:09.220 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:09.220 SYMLINK libspdk_sock_posix.so 00:05:09.220 LIB libspdk_accel_iaa.a 00:05:09.479 SO libspdk_accel_iaa.so.3.0 00:05:09.479 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:09.479 SYMLINK libspdk_accel_iaa.so 00:05:09.479 LIB libspdk_blobfs_bdev.a 00:05:09.479 CC module/bdev/lvol/vbdev_lvol.o 00:05:09.479 SO libspdk_blobfs_bdev.so.6.0 00:05:09.479 LIB libspdk_bdev_error.a 00:05:09.479 LIB libspdk_bdev_gpt.a 00:05:09.479 SO libspdk_bdev_error.so.6.0 00:05:09.479 SO libspdk_bdev_gpt.so.6.0 00:05:09.479 CC module/bdev/malloc/bdev_malloc.o 00:05:09.479 SYMLINK libspdk_blobfs_bdev.so 00:05:09.479 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:09.479 SYMLINK libspdk_bdev_error.so 00:05:09.479 LIB libspdk_bdev_delay.a 00:05:09.479 SYMLINK libspdk_bdev_gpt.so 00:05:09.479 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:09.479 CC module/bdev/nvme/bdev_nvme.o 00:05:09.479 CC module/bdev/null/bdev_null.o 00:05:09.739 SO libspdk_bdev_delay.so.6.0 00:05:09.739 SYMLINK libspdk_bdev_delay.so 00:05:09.739 CC module/bdev/null/bdev_null_rpc.o 00:05:09.739 CC module/bdev/passthru/vbdev_passthru.o 00:05:09.739 CC module/bdev/raid/bdev_raid.o 00:05:09.739 CC module/bdev/raid/bdev_raid_rpc.o 00:05:09.739 LIB libspdk_sock_uring.a 00:05:09.739 SO libspdk_sock_uring.so.5.0 00:05:09.998 SYMLINK libspdk_sock_uring.so 00:05:09.998 CC module/bdev/raid/bdev_raid_sb.o 00:05:09.998 CC module/bdev/raid/raid0.o 00:05:09.998 LIB libspdk_bdev_null.a 00:05:09.998 LIB libspdk_bdev_malloc.a 00:05:09.998 SO libspdk_bdev_null.so.6.0 00:05:09.998 SO libspdk_bdev_malloc.so.6.0 00:05:09.998 CC module/bdev/raid/raid1.o 00:05:09.998 LIB libspdk_bdev_lvol.a 00:05:09.998 SYMLINK libspdk_bdev_null.so 00:05:09.998 CC module/bdev/raid/concat.o 00:05:09.998 SO libspdk_bdev_lvol.so.6.0 00:05:09.998 SYMLINK libspdk_bdev_malloc.so 00:05:09.998 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:09.998 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:09.998 SYMLINK libspdk_bdev_lvol.so 00:05:09.998 CC module/bdev/nvme/nvme_rpc.o 00:05:10.256 CC module/bdev/nvme/bdev_mdns_client.o 00:05:10.256 CC module/bdev/nvme/vbdev_opal.o 00:05:10.256 LIB libspdk_bdev_passthru.a 00:05:10.256 SO 
libspdk_bdev_passthru.so.6.0 00:05:10.256 CC module/bdev/split/vbdev_split.o 00:05:10.256 CC module/bdev/split/vbdev_split_rpc.o 00:05:10.256 SYMLINK libspdk_bdev_passthru.so 00:05:10.256 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:10.256 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:10.515 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:10.515 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:10.515 CC module/bdev/uring/bdev_uring.o 00:05:10.515 LIB libspdk_bdev_split.a 00:05:10.515 SO libspdk_bdev_split.so.6.0 00:05:10.515 CC module/bdev/uring/bdev_uring_rpc.o 00:05:10.515 SYMLINK libspdk_bdev_split.so 00:05:10.515 CC module/bdev/aio/bdev_aio.o 00:05:10.773 CC module/bdev/aio/bdev_aio_rpc.o 00:05:10.773 CC module/bdev/ftl/bdev_ftl.o 00:05:10.773 CC module/bdev/iscsi/bdev_iscsi.o 00:05:10.773 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:10.773 LIB libspdk_bdev_raid.a 00:05:10.773 LIB libspdk_bdev_zone_block.a 00:05:10.773 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:10.773 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:10.773 SO libspdk_bdev_zone_block.so.6.0 00:05:10.773 SO libspdk_bdev_raid.so.6.0 00:05:11.031 SYMLINK libspdk_bdev_zone_block.so 00:05:11.031 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:11.031 LIB libspdk_bdev_uring.a 00:05:11.031 SYMLINK libspdk_bdev_raid.so 00:05:11.031 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:11.031 LIB libspdk_bdev_aio.a 00:05:11.031 SO libspdk_bdev_uring.so.6.0 00:05:11.031 SO libspdk_bdev_aio.so.6.0 00:05:11.031 SYMLINK libspdk_bdev_uring.so 00:05:11.031 SYMLINK libspdk_bdev_aio.so 00:05:11.031 LIB libspdk_bdev_ftl.a 00:05:11.031 SO libspdk_bdev_ftl.so.6.0 00:05:11.031 LIB libspdk_bdev_iscsi.a 00:05:11.289 SYMLINK libspdk_bdev_ftl.so 00:05:11.289 SO libspdk_bdev_iscsi.so.6.0 00:05:11.289 SYMLINK libspdk_bdev_iscsi.so 00:05:11.289 LIB libspdk_bdev_virtio.a 00:05:11.289 SO libspdk_bdev_virtio.so.6.0 00:05:11.546 SYMLINK libspdk_bdev_virtio.so 00:05:12.126 LIB libspdk_bdev_nvme.a 00:05:12.384 SO libspdk_bdev_nvme.so.7.1 00:05:12.384 SYMLINK libspdk_bdev_nvme.so 00:05:12.950 CC module/event/subsystems/keyring/keyring.o 00:05:12.950 CC module/event/subsystems/iobuf/iobuf.o 00:05:12.950 CC module/event/subsystems/sock/sock.o 00:05:12.950 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:12.950 CC module/event/subsystems/fsdev/fsdev.o 00:05:12.950 CC module/event/subsystems/scheduler/scheduler.o 00:05:12.950 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:12.950 CC module/event/subsystems/vmd/vmd.o 00:05:12.950 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:12.950 LIB libspdk_event_keyring.a 00:05:12.950 LIB libspdk_event_sock.a 00:05:12.950 LIB libspdk_event_vhost_blk.a 00:05:12.950 SO libspdk_event_keyring.so.1.0 00:05:12.950 LIB libspdk_event_fsdev.a 00:05:12.950 LIB libspdk_event_vmd.a 00:05:12.950 SO libspdk_event_sock.so.5.0 00:05:12.950 SO libspdk_event_vhost_blk.so.3.0 00:05:12.950 LIB libspdk_event_scheduler.a 00:05:12.950 LIB libspdk_event_iobuf.a 00:05:13.209 SO libspdk_event_fsdev.so.1.0 00:05:13.209 SO libspdk_event_scheduler.so.4.0 00:05:13.209 SO libspdk_event_iobuf.so.3.0 00:05:13.209 SO libspdk_event_vmd.so.6.0 00:05:13.209 SYMLINK libspdk_event_vhost_blk.so 00:05:13.209 SYMLINK libspdk_event_sock.so 00:05:13.209 SYMLINK libspdk_event_keyring.so 00:05:13.209 SYMLINK libspdk_event_scheduler.so 00:05:13.209 SYMLINK libspdk_event_fsdev.so 00:05:13.209 SYMLINK libspdk_event_vmd.so 00:05:13.209 SYMLINK libspdk_event_iobuf.so 00:05:13.467 CC module/event/subsystems/accel/accel.o 00:05:13.725 LIB libspdk_event_accel.a 00:05:13.725 
SO libspdk_event_accel.so.6.0 00:05:13.725 SYMLINK libspdk_event_accel.so 00:05:13.983 CC module/event/subsystems/bdev/bdev.o 00:05:14.242 LIB libspdk_event_bdev.a 00:05:14.242 SO libspdk_event_bdev.so.6.0 00:05:14.242 SYMLINK libspdk_event_bdev.so 00:05:14.501 CC module/event/subsystems/ublk/ublk.o 00:05:14.501 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:14.501 CC module/event/subsystems/scsi/scsi.o 00:05:14.501 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:14.501 CC module/event/subsystems/nbd/nbd.o 00:05:14.760 LIB libspdk_event_ublk.a 00:05:14.760 LIB libspdk_event_nbd.a 00:05:14.760 LIB libspdk_event_scsi.a 00:05:14.760 SO libspdk_event_ublk.so.3.0 00:05:14.760 SO libspdk_event_nbd.so.6.0 00:05:14.760 SO libspdk_event_scsi.so.6.0 00:05:14.760 SYMLINK libspdk_event_nbd.so 00:05:14.760 SYMLINK libspdk_event_ublk.so 00:05:14.760 LIB libspdk_event_nvmf.a 00:05:14.760 SYMLINK libspdk_event_scsi.so 00:05:15.019 SO libspdk_event_nvmf.so.6.0 00:05:15.019 SYMLINK libspdk_event_nvmf.so 00:05:15.019 CC module/event/subsystems/iscsi/iscsi.o 00:05:15.019 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:15.278 LIB libspdk_event_vhost_scsi.a 00:05:15.278 LIB libspdk_event_iscsi.a 00:05:15.278 SO libspdk_event_vhost_scsi.so.3.0 00:05:15.278 SO libspdk_event_iscsi.so.6.0 00:05:15.536 SYMLINK libspdk_event_vhost_scsi.so 00:05:15.536 SYMLINK libspdk_event_iscsi.so 00:05:15.536 SO libspdk.so.6.0 00:05:15.536 SYMLINK libspdk.so 00:05:15.794 CC app/trace_record/trace_record.o 00:05:15.794 TEST_HEADER include/spdk/accel.h 00:05:15.794 TEST_HEADER include/spdk/accel_module.h 00:05:15.794 TEST_HEADER include/spdk/assert.h 00:05:15.794 TEST_HEADER include/spdk/barrier.h 00:05:15.794 TEST_HEADER include/spdk/base64.h 00:05:15.794 CXX app/trace/trace.o 00:05:15.794 TEST_HEADER include/spdk/bdev.h 00:05:15.794 TEST_HEADER include/spdk/bdev_module.h 00:05:15.794 TEST_HEADER include/spdk/bdev_zone.h 00:05:15.794 TEST_HEADER include/spdk/bit_array.h 00:05:15.794 TEST_HEADER include/spdk/bit_pool.h 00:05:15.794 TEST_HEADER include/spdk/blob_bdev.h 00:05:15.794 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:15.794 TEST_HEADER include/spdk/blobfs.h 00:05:15.794 TEST_HEADER include/spdk/blob.h 00:05:15.794 TEST_HEADER include/spdk/conf.h 00:05:15.794 TEST_HEADER include/spdk/config.h 00:05:15.794 TEST_HEADER include/spdk/cpuset.h 00:05:15.794 TEST_HEADER include/spdk/crc16.h 00:05:15.794 TEST_HEADER include/spdk/crc32.h 00:05:15.794 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:15.794 TEST_HEADER include/spdk/crc64.h 00:05:15.794 TEST_HEADER include/spdk/dif.h 00:05:15.794 TEST_HEADER include/spdk/dma.h 00:05:15.794 TEST_HEADER include/spdk/endian.h 00:05:15.794 CC app/nvmf_tgt/nvmf_main.o 00:05:15.794 TEST_HEADER include/spdk/env_dpdk.h 00:05:15.794 TEST_HEADER include/spdk/env.h 00:05:15.794 TEST_HEADER include/spdk/event.h 00:05:15.794 TEST_HEADER include/spdk/fd_group.h 00:05:15.794 TEST_HEADER include/spdk/fd.h 00:05:15.794 TEST_HEADER include/spdk/file.h 00:05:16.053 TEST_HEADER include/spdk/fsdev.h 00:05:16.053 TEST_HEADER include/spdk/fsdev_module.h 00:05:16.053 TEST_HEADER include/spdk/ftl.h 00:05:16.053 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:16.053 TEST_HEADER include/spdk/gpt_spec.h 00:05:16.053 TEST_HEADER include/spdk/hexlify.h 00:05:16.053 TEST_HEADER include/spdk/histogram_data.h 00:05:16.053 TEST_HEADER include/spdk/idxd.h 00:05:16.053 CC examples/ioat/perf/perf.o 00:05:16.053 TEST_HEADER include/spdk/idxd_spec.h 00:05:16.053 TEST_HEADER include/spdk/init.h 00:05:16.053 
TEST_HEADER include/spdk/ioat.h 00:05:16.053 CC test/thread/poller_perf/poller_perf.o 00:05:16.053 TEST_HEADER include/spdk/ioat_spec.h 00:05:16.053 CC examples/util/zipf/zipf.o 00:05:16.053 TEST_HEADER include/spdk/iscsi_spec.h 00:05:16.053 TEST_HEADER include/spdk/json.h 00:05:16.053 TEST_HEADER include/spdk/jsonrpc.h 00:05:16.053 TEST_HEADER include/spdk/keyring.h 00:05:16.053 TEST_HEADER include/spdk/keyring_module.h 00:05:16.053 TEST_HEADER include/spdk/likely.h 00:05:16.053 TEST_HEADER include/spdk/log.h 00:05:16.053 TEST_HEADER include/spdk/lvol.h 00:05:16.053 TEST_HEADER include/spdk/md5.h 00:05:16.053 TEST_HEADER include/spdk/memory.h 00:05:16.053 CC test/app/bdev_svc/bdev_svc.o 00:05:16.053 TEST_HEADER include/spdk/mmio.h 00:05:16.053 TEST_HEADER include/spdk/nbd.h 00:05:16.053 TEST_HEADER include/spdk/net.h 00:05:16.053 TEST_HEADER include/spdk/notify.h 00:05:16.053 TEST_HEADER include/spdk/nvme.h 00:05:16.053 TEST_HEADER include/spdk/nvme_intel.h 00:05:16.053 CC test/dma/test_dma/test_dma.o 00:05:16.053 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:16.053 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:16.053 TEST_HEADER include/spdk/nvme_spec.h 00:05:16.053 TEST_HEADER include/spdk/nvme_zns.h 00:05:16.053 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:16.053 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:16.053 TEST_HEADER include/spdk/nvmf.h 00:05:16.053 TEST_HEADER include/spdk/nvmf_spec.h 00:05:16.053 TEST_HEADER include/spdk/nvmf_transport.h 00:05:16.053 TEST_HEADER include/spdk/opal.h 00:05:16.053 TEST_HEADER include/spdk/opal_spec.h 00:05:16.053 TEST_HEADER include/spdk/pci_ids.h 00:05:16.053 TEST_HEADER include/spdk/pipe.h 00:05:16.053 TEST_HEADER include/spdk/queue.h 00:05:16.053 TEST_HEADER include/spdk/reduce.h 00:05:16.053 TEST_HEADER include/spdk/rpc.h 00:05:16.053 TEST_HEADER include/spdk/scheduler.h 00:05:16.053 TEST_HEADER include/spdk/scsi.h 00:05:16.053 TEST_HEADER include/spdk/scsi_spec.h 00:05:16.053 TEST_HEADER include/spdk/sock.h 00:05:16.053 TEST_HEADER include/spdk/stdinc.h 00:05:16.053 TEST_HEADER include/spdk/string.h 00:05:16.053 TEST_HEADER include/spdk/thread.h 00:05:16.053 TEST_HEADER include/spdk/trace.h 00:05:16.053 TEST_HEADER include/spdk/trace_parser.h 00:05:16.053 TEST_HEADER include/spdk/tree.h 00:05:16.053 TEST_HEADER include/spdk/ublk.h 00:05:16.053 TEST_HEADER include/spdk/util.h 00:05:16.053 TEST_HEADER include/spdk/uuid.h 00:05:16.053 TEST_HEADER include/spdk/version.h 00:05:16.053 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:16.053 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:16.053 TEST_HEADER include/spdk/vhost.h 00:05:16.053 TEST_HEADER include/spdk/vmd.h 00:05:16.053 TEST_HEADER include/spdk/xor.h 00:05:16.053 TEST_HEADER include/spdk/zipf.h 00:05:16.053 CXX test/cpp_headers/accel.o 00:05:16.053 LINK spdk_trace_record 00:05:16.053 LINK interrupt_tgt 00:05:16.312 LINK poller_perf 00:05:16.312 LINK nvmf_tgt 00:05:16.312 LINK zipf 00:05:16.312 LINK ioat_perf 00:05:16.312 LINK bdev_svc 00:05:16.312 CXX test/cpp_headers/accel_module.o 00:05:16.312 LINK spdk_trace 00:05:16.571 CC examples/ioat/verify/verify.o 00:05:16.571 CC test/rpc_client/rpc_client_test.o 00:05:16.571 CC test/app/histogram_perf/histogram_perf.o 00:05:16.571 CXX test/cpp_headers/assert.o 00:05:16.571 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:16.571 CC test/event/event_perf/event_perf.o 00:05:16.571 LINK test_dma 00:05:16.571 CC test/app/jsoncat/jsoncat.o 00:05:16.571 CC test/env/mem_callbacks/mem_callbacks.o 00:05:16.571 CC app/iscsi_tgt/iscsi_tgt.o 00:05:16.571 LINK 
verify 00:05:16.571 LINK histogram_perf 00:05:16.828 CXX test/cpp_headers/barrier.o 00:05:16.828 LINK jsoncat 00:05:16.828 LINK event_perf 00:05:16.828 LINK rpc_client_test 00:05:16.828 CXX test/cpp_headers/base64.o 00:05:16.828 CC test/env/vtophys/vtophys.o 00:05:16.828 CC test/event/reactor/reactor.o 00:05:16.828 LINK iscsi_tgt 00:05:17.088 LINK nvme_fuzz 00:05:17.088 CC test/event/reactor_perf/reactor_perf.o 00:05:17.088 CC test/event/app_repeat/app_repeat.o 00:05:17.088 CC app/spdk_tgt/spdk_tgt.o 00:05:17.088 CC examples/thread/thread/thread_ex.o 00:05:17.088 CXX test/cpp_headers/bdev.o 00:05:17.088 LINK vtophys 00:05:17.088 LINK reactor 00:05:17.088 LINK reactor_perf 00:05:17.348 LINK app_repeat 00:05:17.348 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:17.348 CC test/event/scheduler/scheduler.o 00:05:17.348 LINK spdk_tgt 00:05:17.348 CXX test/cpp_headers/bdev_module.o 00:05:17.348 LINK mem_callbacks 00:05:17.348 LINK thread 00:05:17.348 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:17.348 CC test/app/stub/stub.o 00:05:17.607 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:17.607 CC test/accel/dif/dif.o 00:05:17.607 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:17.607 LINK scheduler 00:05:17.607 CXX test/cpp_headers/bdev_zone.o 00:05:17.607 CC app/spdk_lspci/spdk_lspci.o 00:05:17.607 CC examples/sock/hello_world/hello_sock.o 00:05:17.607 LINK stub 00:05:17.607 CC examples/vmd/lsvmd/lsvmd.o 00:05:17.607 LINK env_dpdk_post_init 00:05:17.872 LINK spdk_lspci 00:05:17.872 CXX test/cpp_headers/bit_array.o 00:05:17.872 CC app/spdk_nvme_perf/perf.o 00:05:17.872 LINK lsvmd 00:05:17.872 LINK hello_sock 00:05:17.872 CXX test/cpp_headers/bit_pool.o 00:05:17.872 LINK vhost_fuzz 00:05:17.872 CC test/env/memory/memory_ut.o 00:05:18.140 CC examples/idxd/perf/perf.o 00:05:18.140 CC examples/vmd/led/led.o 00:05:18.140 CXX test/cpp_headers/blob_bdev.o 00:05:18.140 LINK dif 00:05:18.140 LINK led 00:05:18.140 CC test/blobfs/mkfs/mkfs.o 00:05:18.399 CXX test/cpp_headers/blobfs_bdev.o 00:05:18.399 CC test/nvme/aer/aer.o 00:05:18.399 CC test/lvol/esnap/esnap.o 00:05:18.399 LINK idxd_perf 00:05:18.399 CC test/env/pci/pci_ut.o 00:05:18.399 LINK mkfs 00:05:18.399 CXX test/cpp_headers/blobfs.o 00:05:18.658 LINK aer 00:05:18.658 CC test/bdev/bdevio/bdevio.o 00:05:18.658 CXX test/cpp_headers/blob.o 00:05:18.658 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:18.658 LINK spdk_nvme_perf 00:05:18.658 LINK iscsi_fuzz 00:05:18.917 CC test/nvme/reset/reset.o 00:05:18.917 CC examples/accel/perf/accel_perf.o 00:05:18.917 LINK pci_ut 00:05:18.917 CXX test/cpp_headers/conf.o 00:05:18.917 CC app/spdk_nvme_identify/identify.o 00:05:18.917 LINK hello_fsdev 00:05:18.917 CC app/spdk_nvme_discover/discovery_aer.o 00:05:18.917 LINK bdevio 00:05:19.177 CXX test/cpp_headers/config.o 00:05:19.177 CXX test/cpp_headers/cpuset.o 00:05:19.177 CXX test/cpp_headers/crc16.o 00:05:19.177 LINK reset 00:05:19.177 LINK memory_ut 00:05:19.177 LINK spdk_nvme_discover 00:05:19.177 CC test/nvme/sgl/sgl.o 00:05:19.177 CXX test/cpp_headers/crc32.o 00:05:19.437 CC test/nvme/e2edp/nvme_dp.o 00:05:19.437 LINK accel_perf 00:05:19.437 CC examples/blob/hello_world/hello_blob.o 00:05:19.437 CC examples/nvme/hello_world/hello_world.o 00:05:19.437 CC app/spdk_top/spdk_top.o 00:05:19.437 CXX test/cpp_headers/crc64.o 00:05:19.437 CC app/vhost/vhost.o 00:05:19.437 LINK sgl 00:05:19.695 CC test/nvme/overhead/overhead.o 00:05:19.695 LINK nvme_dp 00:05:19.695 CXX test/cpp_headers/dif.o 00:05:19.695 LINK hello_world 00:05:19.695 LINK hello_blob 
00:05:19.695 LINK vhost 00:05:19.695 LINK spdk_nvme_identify 00:05:19.954 CXX test/cpp_headers/dma.o 00:05:19.954 CC app/spdk_dd/spdk_dd.o 00:05:19.954 CC examples/nvme/reconnect/reconnect.o 00:05:19.954 LINK overhead 00:05:19.954 CC examples/blob/cli/blobcli.o 00:05:19.954 CC test/nvme/err_injection/err_injection.o 00:05:19.954 CXX test/cpp_headers/endian.o 00:05:19.954 CC test/nvme/startup/startup.o 00:05:19.954 CC test/nvme/reserve/reserve.o 00:05:20.212 CXX test/cpp_headers/env_dpdk.o 00:05:20.212 LINK err_injection 00:05:20.212 LINK startup 00:05:20.212 LINK reconnect 00:05:20.212 LINK reserve 00:05:20.471 LINK spdk_dd 00:05:20.471 LINK spdk_top 00:05:20.471 CC examples/bdev/hello_world/hello_bdev.o 00:05:20.471 CXX test/cpp_headers/env.o 00:05:20.471 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:20.471 LINK blobcli 00:05:20.471 CC examples/nvme/arbitration/arbitration.o 00:05:20.471 CC examples/nvme/hotplug/hotplug.o 00:05:20.471 CXX test/cpp_headers/event.o 00:05:20.731 CC test/nvme/simple_copy/simple_copy.o 00:05:20.731 CC test/nvme/connect_stress/connect_stress.o 00:05:20.731 LINK hello_bdev 00:05:20.731 CXX test/cpp_headers/fd_group.o 00:05:20.731 CC app/fio/nvme/fio_plugin.o 00:05:20.731 LINK connect_stress 00:05:20.731 LINK hotplug 00:05:20.731 LINK simple_copy 00:05:20.990 CC test/nvme/boot_partition/boot_partition.o 00:05:20.990 CXX test/cpp_headers/fd.o 00:05:20.990 LINK arbitration 00:05:20.990 CXX test/cpp_headers/file.o 00:05:20.990 CXX test/cpp_headers/fsdev.o 00:05:20.990 CC examples/bdev/bdevperf/bdevperf.o 00:05:20.990 LINK nvme_manage 00:05:20.990 CXX test/cpp_headers/fsdev_module.o 00:05:20.990 LINK boot_partition 00:05:21.248 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:21.248 CC app/fio/bdev/fio_plugin.o 00:05:21.248 CXX test/cpp_headers/ftl.o 00:05:21.248 CXX test/cpp_headers/fuse_dispatcher.o 00:05:21.248 CC examples/nvme/abort/abort.o 00:05:21.248 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:21.248 LINK spdk_nvme 00:05:21.248 CC test/nvme/compliance/nvme_compliance.o 00:05:21.507 LINK cmb_copy 00:05:21.507 CXX test/cpp_headers/gpt_spec.o 00:05:21.507 LINK pmr_persistence 00:05:21.507 CC test/nvme/fused_ordering/fused_ordering.o 00:05:21.507 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:21.507 CXX test/cpp_headers/hexlify.o 00:05:21.507 LINK abort 00:05:21.507 CC test/nvme/fdp/fdp.o 00:05:21.766 LINK nvme_compliance 00:05:21.766 LINK doorbell_aers 00:05:21.766 LINK spdk_bdev 00:05:21.766 LINK fused_ordering 00:05:21.766 CC test/nvme/cuse/cuse.o 00:05:21.766 CXX test/cpp_headers/histogram_data.o 00:05:21.766 CXX test/cpp_headers/idxd.o 00:05:21.766 CXX test/cpp_headers/idxd_spec.o 00:05:21.766 LINK bdevperf 00:05:21.766 CXX test/cpp_headers/init.o 00:05:21.766 CXX test/cpp_headers/ioat.o 00:05:21.766 CXX test/cpp_headers/ioat_spec.o 00:05:22.024 CXX test/cpp_headers/iscsi_spec.o 00:05:22.024 CXX test/cpp_headers/json.o 00:05:22.024 LINK fdp 00:05:22.024 CXX test/cpp_headers/jsonrpc.o 00:05:22.024 CXX test/cpp_headers/keyring.o 00:05:22.024 CXX test/cpp_headers/keyring_module.o 00:05:22.024 CXX test/cpp_headers/likely.o 00:05:22.024 CXX test/cpp_headers/log.o 00:05:22.024 CXX test/cpp_headers/lvol.o 00:05:22.024 CXX test/cpp_headers/md5.o 00:05:22.282 CXX test/cpp_headers/memory.o 00:05:22.282 CXX test/cpp_headers/mmio.o 00:05:22.282 CXX test/cpp_headers/nbd.o 00:05:22.282 CXX test/cpp_headers/net.o 00:05:22.282 CXX test/cpp_headers/notify.o 00:05:22.282 CC examples/nvmf/nvmf/nvmf.o 00:05:22.282 CXX test/cpp_headers/nvme.o 00:05:22.282 CXX 
test/cpp_headers/nvme_intel.o 00:05:22.282 CXX test/cpp_headers/nvme_ocssd.o 00:05:22.282 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:22.282 CXX test/cpp_headers/nvme_spec.o 00:05:22.541 CXX test/cpp_headers/nvme_zns.o 00:05:22.541 CXX test/cpp_headers/nvmf_cmd.o 00:05:22.541 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:22.541 CXX test/cpp_headers/nvmf.o 00:05:22.541 CXX test/cpp_headers/nvmf_spec.o 00:05:22.541 CXX test/cpp_headers/nvmf_transport.o 00:05:22.541 LINK nvmf 00:05:22.541 CXX test/cpp_headers/opal.o 00:05:22.541 CXX test/cpp_headers/opal_spec.o 00:05:22.541 CXX test/cpp_headers/pci_ids.o 00:05:22.801 CXX test/cpp_headers/pipe.o 00:05:22.801 CXX test/cpp_headers/queue.o 00:05:22.801 CXX test/cpp_headers/reduce.o 00:05:22.801 CXX test/cpp_headers/rpc.o 00:05:22.801 CXX test/cpp_headers/scheduler.o 00:05:22.801 CXX test/cpp_headers/scsi.o 00:05:22.801 CXX test/cpp_headers/scsi_spec.o 00:05:22.801 CXX test/cpp_headers/sock.o 00:05:22.801 CXX test/cpp_headers/stdinc.o 00:05:22.801 CXX test/cpp_headers/string.o 00:05:22.801 CXX test/cpp_headers/thread.o 00:05:22.801 CXX test/cpp_headers/trace.o 00:05:23.060 CXX test/cpp_headers/trace_parser.o 00:05:23.060 CXX test/cpp_headers/tree.o 00:05:23.060 CXX test/cpp_headers/ublk.o 00:05:23.060 CXX test/cpp_headers/util.o 00:05:23.060 CXX test/cpp_headers/uuid.o 00:05:23.060 CXX test/cpp_headers/version.o 00:05:23.060 CXX test/cpp_headers/vfio_user_pci.o 00:05:23.060 CXX test/cpp_headers/vfio_user_spec.o 00:05:23.060 CXX test/cpp_headers/vhost.o 00:05:23.060 CXX test/cpp_headers/vmd.o 00:05:23.060 LINK cuse 00:05:23.060 CXX test/cpp_headers/xor.o 00:05:23.060 CXX test/cpp_headers/zipf.o 00:05:23.627 LINK esnap 00:05:23.885 00:05:23.886 real 1m28.887s 00:05:23.886 user 8m11.734s 00:05:23.886 sys 1m42.062s 00:05:24.145 ************************************ 00:05:24.145 END TEST make 00:05:24.145 ************************************ 00:05:24.145 09:59:37 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:24.145 09:59:37 make -- common/autotest_common.sh@10 -- $ set +x 00:05:24.145 09:59:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:24.145 09:59:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:24.145 09:59:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:24.145 09:59:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.145 09:59:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:24.145 09:59:37 -- pm/common@44 -- $ pid=5253 00:05:24.145 09:59:37 -- pm/common@50 -- $ kill -TERM 5253 00:05:24.145 09:59:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.145 09:59:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:24.145 09:59:37 -- pm/common@44 -- $ pid=5255 00:05:24.145 09:59:37 -- pm/common@50 -- $ kill -TERM 5255 00:05:24.145 09:59:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:24.145 09:59:37 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:24.145 09:59:37 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.145 09:59:37 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.145 09:59:37 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.145 09:59:37 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.145 09:59:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.145 
09:59:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.145 09:59:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.145 09:59:37 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.145 09:59:37 -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.145 09:59:37 -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.145 09:59:37 -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.145 09:59:37 -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.145 09:59:37 -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.145 09:59:37 -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.145 09:59:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.145 09:59:37 -- scripts/common.sh@344 -- # case "$op" in 00:05:24.145 09:59:37 -- scripts/common.sh@345 -- # : 1 00:05:24.145 09:59:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.145 09:59:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.145 09:59:37 -- scripts/common.sh@365 -- # decimal 1 00:05:24.145 09:59:37 -- scripts/common.sh@353 -- # local d=1 00:05:24.145 09:59:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.145 09:59:37 -- scripts/common.sh@355 -- # echo 1 00:05:24.145 09:59:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.145 09:59:37 -- scripts/common.sh@366 -- # decimal 2 00:05:24.145 09:59:37 -- scripts/common.sh@353 -- # local d=2 00:05:24.145 09:59:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.145 09:59:37 -- scripts/common.sh@355 -- # echo 2 00:05:24.145 09:59:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.145 09:59:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.145 09:59:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.145 09:59:37 -- scripts/common.sh@368 -- # return 0 00:05:24.145 09:59:37 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.145 09:59:37 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.145 --rc genhtml_branch_coverage=1 00:05:24.145 --rc genhtml_function_coverage=1 00:05:24.145 --rc genhtml_legend=1 00:05:24.145 --rc geninfo_all_blocks=1 00:05:24.145 --rc geninfo_unexecuted_blocks=1 00:05:24.145 00:05:24.145 ' 00:05:24.145 09:59:37 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.145 --rc genhtml_branch_coverage=1 00:05:24.145 --rc genhtml_function_coverage=1 00:05:24.145 --rc genhtml_legend=1 00:05:24.145 --rc geninfo_all_blocks=1 00:05:24.145 --rc geninfo_unexecuted_blocks=1 00:05:24.145 00:05:24.145 ' 00:05:24.145 09:59:37 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.145 --rc genhtml_branch_coverage=1 00:05:24.145 --rc genhtml_function_coverage=1 00:05:24.145 --rc genhtml_legend=1 00:05:24.145 --rc geninfo_all_blocks=1 00:05:24.145 --rc geninfo_unexecuted_blocks=1 00:05:24.145 00:05:24.145 ' 00:05:24.145 09:59:37 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.145 --rc genhtml_branch_coverage=1 00:05:24.145 --rc genhtml_function_coverage=1 00:05:24.145 --rc genhtml_legend=1 00:05:24.145 --rc geninfo_all_blocks=1 00:05:24.145 --rc geninfo_unexecuted_blocks=1 00:05:24.145 00:05:24.145 ' 00:05:24.145 09:59:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:05:24.145 09:59:38 -- nvmf/common.sh@7 -- # uname -s 00:05:24.145 09:59:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.145 09:59:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.145 09:59:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.145 09:59:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.145 09:59:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.145 09:59:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.145 09:59:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.145 09:59:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.145 09:59:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.145 09:59:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.145 09:59:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:05:24.145 09:59:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:05:24.145 09:59:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.145 09:59:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.145 09:59:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:24.145 09:59:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.145 09:59:38 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.145 09:59:38 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.145 09:59:38 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.145 09:59:38 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.145 09:59:38 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.146 09:59:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.146 09:59:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.146 09:59:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.146 09:59:38 -- paths/export.sh@5 -- # export PATH 00:05:24.146 09:59:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.146 09:59:38 -- nvmf/common.sh@51 -- # : 0 00:05:24.146 09:59:38 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.146 09:59:38 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.146 09:59:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.146 09:59:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.146 09:59:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.146 09:59:38 -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.146 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.146 09:59:38 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.146 09:59:38 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.405 09:59:38 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.405 09:59:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:24.405 09:59:38 -- spdk/autotest.sh@32 -- # uname -s 00:05:24.405 09:59:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:24.405 09:59:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:24.405 09:59:38 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:24.405 09:59:38 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:24.405 09:59:38 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:24.405 09:59:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:24.405 09:59:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:24.405 09:59:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:24.405 09:59:38 -- spdk/autotest.sh@48 -- # udevadm_pid=54362 00:05:24.405 09:59:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:24.405 09:59:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:24.405 09:59:38 -- pm/common@17 -- # local monitor 00:05:24.405 09:59:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.405 09:59:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.405 09:59:38 -- pm/common@21 -- # date +%s 00:05:24.405 09:59:38 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732010378 00:05:24.405 09:59:38 -- pm/common@21 -- # date +%s 00:05:24.405 09:59:38 -- pm/common@25 -- # sleep 1 00:05:24.405 09:59:38 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732010378 00:05:24.405 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732010378_collect-cpu-load.pm.log 00:05:24.405 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732010378_collect-vmstat.pm.log 00:05:25.342 09:59:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:25.342 09:59:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:25.342 09:59:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.342 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 09:59:39 -- spdk/autotest.sh@59 -- # create_test_list 00:05:25.343 09:59:39 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:25.343 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 09:59:39 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:25.343 09:59:39 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:25.343 09:59:39 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:25.343 09:59:39 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:25.343 09:59:39 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:25.343 09:59:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:25.343 09:59:39 -- common/autotest_common.sh@1457 -- # 
uname 00:05:25.343 09:59:39 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:25.343 09:59:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:25.343 09:59:39 -- common/autotest_common.sh@1477 -- # uname 00:05:25.343 09:59:39 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:25.343 09:59:39 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:25.343 09:59:39 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:25.601 lcov: LCOV version 1.15 00:05:25.601 09:59:39 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:40.485 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:40.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:55.367 10:00:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:55.367 10:00:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.367 10:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.367 10:00:09 -- spdk/autotest.sh@78 -- # rm -f 00:05:55.367 10:00:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:55.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.194 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:56.194 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:56.194 10:00:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:56.194 10:00:09 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:56.194 10:00:09 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:56.194 10:00:09 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:56.194 10:00:09 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:56.194 10:00:09 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:56.194 10:00:09 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:56.194 10:00:09 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:56.194 10:00:09 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:56.194 10:00:09 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:56.194 10:00:09 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:56.194 10:00:09 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:56.194 10:00:09 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:56.194 10:00:09 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:56.194 10:00:09 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:56.194 10:00:09 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:56.194 10:00:09 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:56.194 10:00:09 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:56.194 10:00:09 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:56.194 10:00:09 -- 
common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:56.194 10:00:09 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:56.194 10:00:09 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:56.194 10:00:09 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:56.194 10:00:09 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:56.194 10:00:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:56.194 10:00:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:56.194 10:00:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:56.194 10:00:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:56.194 10:00:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:56.194 10:00:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:56.194 No valid GPT data, bailing 00:05:56.194 10:00:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:56.194 10:00:09 -- scripts/common.sh@394 -- # pt= 00:05:56.194 10:00:09 -- scripts/common.sh@395 -- # return 1 00:05:56.194 10:00:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:56.194 1+0 records in 00:05:56.194 1+0 records out 00:05:56.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00455278 s, 230 MB/s 00:05:56.194 10:00:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:56.194 10:00:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:56.194 10:00:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:56.194 10:00:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:56.194 10:00:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:56.194 No valid GPT data, bailing 00:05:56.194 10:00:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:56.194 10:00:10 -- scripts/common.sh@394 -- # pt= 00:05:56.194 10:00:10 -- scripts/common.sh@395 -- # return 1 00:05:56.194 10:00:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:56.194 1+0 records in 00:05:56.194 1+0 records out 00:05:56.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042203 s, 248 MB/s 00:05:56.194 10:00:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:56.194 10:00:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:56.194 10:00:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:56.194 10:00:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:56.194 10:00:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:56.454 No valid GPT data, bailing 00:05:56.454 10:00:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:56.454 10:00:10 -- scripts/common.sh@394 -- # pt= 00:05:56.454 10:00:10 -- scripts/common.sh@395 -- # return 1 00:05:56.454 10:00:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:56.454 1+0 records in 00:05:56.454 1+0 records out 00:05:56.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448563 s, 234 MB/s 00:05:56.454 10:00:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:56.454 10:00:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:56.454 10:00:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:56.454 10:00:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:56.454 10:00:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 
00:05:56.454 No valid GPT data, bailing 00:05:56.454 10:00:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:56.454 10:00:10 -- scripts/common.sh@394 -- # pt= 00:05:56.454 10:00:10 -- scripts/common.sh@395 -- # return 1 00:05:56.454 10:00:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:56.454 1+0 records in 00:05:56.454 1+0 records out 00:05:56.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491105 s, 214 MB/s 00:05:56.454 10:00:10 -- spdk/autotest.sh@105 -- # sync 00:05:57.022 10:00:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:57.022 10:00:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:57.022 10:00:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:58.929 10:00:12 -- spdk/autotest.sh@111 -- # uname -s 00:05:58.929 10:00:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:58.929 10:00:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:58.929 10:00:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:59.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:59.867 Hugepages 00:05:59.867 node hugesize free / total 00:05:59.867 node0 1048576kB 0 / 0 00:05:59.867 node0 2048kB 0 / 0 00:05:59.867 00:05:59.867 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:59.867 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:59.867 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:59.867 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:59.867 10:00:13 -- spdk/autotest.sh@117 -- # uname -s 00:05:59.867 10:00:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:59.867 10:00:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:59.867 10:00:13 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:00.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.693 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.693 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.693 10:00:14 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:01.657 10:00:15 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:01.657 10:00:15 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:01.657 10:00:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:01.657 10:00:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:01.657 10:00:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:01.657 10:00:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:01.658 10:00:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.658 10:00:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:01.658 10:00:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:01.917 10:00:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:01.917 10:00:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:01.917 10:00:15 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.176 Waiting for block devices as requested 00:06:02.176 
0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:02.435 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:02.435 10:00:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:02.435 10:00:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:02.435 10:00:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:02.435 10:00:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:02.435 10:00:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:02.435 10:00:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:02.435 10:00:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:02.435 10:00:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:02.435 10:00:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:02.435 10:00:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:02.435 10:00:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:02.435 10:00:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:02.435 10:00:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:02.435 10:00:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:02.435 10:00:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:02.435 10:00:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:02.435 10:00:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:02.435 10:00:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:02.435 10:00:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:02.435 10:00:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:02.435 10:00:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:02.435 10:00:16 -- common/autotest_common.sh@1543 -- # continue 00:06:02.435 10:00:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:02.435 10:00:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:02.435 10:00:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:02.435 10:00:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:02.435 10:00:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:02.435 10:00:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:02.435 10:00:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:02.435 10:00:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:02.435 10:00:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:02.435 10:00:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:02.435 10:00:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:02.435 10:00:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:02.435 10:00:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:02.435 10:00:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:02.435 10:00:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:02.435 10:00:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:02.435 10:00:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 
00:06:02.435 10:00:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:02.435 10:00:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:02.435 10:00:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:02.435 10:00:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:02.435 10:00:16 -- common/autotest_common.sh@1543 -- # continue 00:06:02.435 10:00:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:02.435 10:00:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.435 10:00:16 -- common/autotest_common.sh@10 -- # set +x 00:06:02.435 10:00:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:02.435 10:00:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.435 10:00:16 -- common/autotest_common.sh@10 -- # set +x 00:06:02.435 10:00:16 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:03.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.373 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:03.373 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:03.373 10:00:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:03.373 10:00:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.373 10:00:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.373 10:00:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:03.373 10:00:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:03.373 10:00:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:03.373 10:00:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:03.373 10:00:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:03.373 10:00:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:03.373 10:00:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:03.373 10:00:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:03.373 10:00:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:03.373 10:00:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:03.373 10:00:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:03.373 10:00:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:03.373 10:00:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:03.373 10:00:17 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:03.373 10:00:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:03.373 10:00:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:03.373 10:00:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:03.373 10:00:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:03.373 10:00:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:03.373 10:00:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:03.373 10:00:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:03.373 10:00:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:03.373 10:00:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:03.373 10:00:17 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:03.373 10:00:17 -- common/autotest_common.sh@1572 -- # return 0 00:06:03.373 10:00:17 -- common/autotest_common.sh@1579 -- # [[ -z '' 
]] 00:06:03.373 10:00:17 -- common/autotest_common.sh@1580 -- # return 0 00:06:03.373 10:00:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:03.373 10:00:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:03.373 10:00:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:03.373 10:00:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:03.373 10:00:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:03.373 10:00:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.373 10:00:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.373 10:00:17 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:03.373 10:00:17 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:03.373 10:00:17 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:03.373 10:00:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:03.373 10:00:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.373 10:00:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.373 10:00:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.373 ************************************ 00:06:03.373 START TEST env 00:06:03.373 ************************************ 00:06:03.373 10:00:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:03.633 * Looking for test storage... 00:06:03.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.633 10:00:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.633 10:00:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.633 10:00:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.633 10:00:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.633 10:00:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.633 10:00:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.633 10:00:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.633 10:00:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.633 10:00:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.633 10:00:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.633 10:00:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.633 10:00:17 env -- scripts/common.sh@344 -- # case "$op" in 00:06:03.633 10:00:17 env -- scripts/common.sh@345 -- # : 1 00:06:03.633 10:00:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.633 10:00:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.633 10:00:17 env -- scripts/common.sh@365 -- # decimal 1 00:06:03.633 10:00:17 env -- scripts/common.sh@353 -- # local d=1 00:06:03.633 10:00:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.633 10:00:17 env -- scripts/common.sh@355 -- # echo 1 00:06:03.633 10:00:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.633 10:00:17 env -- scripts/common.sh@366 -- # decimal 2 00:06:03.633 10:00:17 env -- scripts/common.sh@353 -- # local d=2 00:06:03.633 10:00:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.633 10:00:17 env -- scripts/common.sh@355 -- # echo 2 00:06:03.633 10:00:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.633 10:00:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.633 10:00:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.633 10:00:17 env -- scripts/common.sh@368 -- # return 0 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.633 --rc genhtml_branch_coverage=1 00:06:03.633 --rc genhtml_function_coverage=1 00:06:03.633 --rc genhtml_legend=1 00:06:03.633 --rc geninfo_all_blocks=1 00:06:03.633 --rc geninfo_unexecuted_blocks=1 00:06:03.633 00:06:03.633 ' 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.633 --rc genhtml_branch_coverage=1 00:06:03.633 --rc genhtml_function_coverage=1 00:06:03.633 --rc genhtml_legend=1 00:06:03.633 --rc geninfo_all_blocks=1 00:06:03.633 --rc geninfo_unexecuted_blocks=1 00:06:03.633 00:06:03.633 ' 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.633 --rc genhtml_branch_coverage=1 00:06:03.633 --rc genhtml_function_coverage=1 00:06:03.633 --rc genhtml_legend=1 00:06:03.633 --rc geninfo_all_blocks=1 00:06:03.633 --rc geninfo_unexecuted_blocks=1 00:06:03.633 00:06:03.633 ' 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.633 --rc genhtml_branch_coverage=1 00:06:03.633 --rc genhtml_function_coverage=1 00:06:03.633 --rc genhtml_legend=1 00:06:03.633 --rc geninfo_all_blocks=1 00:06:03.633 --rc geninfo_unexecuted_blocks=1 00:06:03.633 00:06:03.633 ' 00:06:03.633 10:00:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.633 10:00:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.633 10:00:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.633 ************************************ 00:06:03.633 START TEST env_memory 00:06:03.633 ************************************ 00:06:03.633 10:00:17 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:03.633 00:06:03.633 00:06:03.633 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.633 http://cunit.sourceforge.net/ 00:06:03.633 00:06:03.633 00:06:03.633 Suite: memory 00:06:03.633 Test: alloc and free memory map ...[2024-11-19 10:00:17.488531] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:03.633 passed 00:06:03.633 Test: mem map translation ...[2024-11-19 10:00:17.520452] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:03.633 [2024-11-19 10:00:17.520688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:03.633 [2024-11-19 10:00:17.520932] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:03.633 [2024-11-19 10:00:17.521264] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:03.892 passed 00:06:03.892 Test: mem map registration ...[2024-11-19 10:00:17.585499] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:03.892 [2024-11-19 10:00:17.585700] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:03.892 passed 00:06:03.892 Test: mem map adjacent registrations ...passed 00:06:03.892 00:06:03.892 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.892 suites 1 1 n/a 0 0 00:06:03.892 tests 4 4 4 0 0 00:06:03.892 asserts 152 152 152 0 n/a 00:06:03.892 00:06:03.892 Elapsed time = 0.222 seconds 00:06:03.892 00:06:03.892 real 0m0.243s 00:06:03.892 user 0m0.225s 00:06:03.892 sys 0m0.012s 00:06:03.892 10:00:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.892 10:00:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:03.892 ************************************ 00:06:03.892 END TEST env_memory 00:06:03.892 ************************************ 00:06:03.892 10:00:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:03.892 10:00:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.892 10:00:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.892 10:00:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.892 ************************************ 00:06:03.892 START TEST env_vtophys 00:06:03.892 ************************************ 00:06:03.892 10:00:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:03.893 EAL: lib.eal log level changed from notice to debug 00:06:03.893 EAL: Detected lcore 0 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 1 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 2 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 3 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 4 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 5 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 6 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 7 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 8 as core 0 on socket 0 00:06:03.893 EAL: Detected lcore 9 as core 0 on socket 0 00:06:03.893 EAL: Maximum logical cores by configuration: 128 00:06:03.893 EAL: Detected CPU lcores: 10 00:06:03.893 EAL: Detected NUMA nodes: 1 00:06:03.893 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:03.893 EAL: Detected shared linkage of DPDK 00:06:03.893 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:03.893 EAL: Selected IOVA mode 'PA' 00:06:03.893 EAL: Probing VFIO support... 00:06:03.893 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:03.893 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:03.893 EAL: Ask a virtual area of 0x2e000 bytes 00:06:03.893 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:03.893 EAL: Setting up physically contiguous memory... 00:06:03.893 EAL: Setting maximum number of open files to 524288 00:06:03.893 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:03.893 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:03.893 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.893 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:03.893 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.893 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.893 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:03.893 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:03.893 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.893 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:03.893 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.893 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.893 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:03.893 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:03.893 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.893 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:04.152 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:04.153 EAL: Ask a virtual area of 0x400000000 bytes 00:06:04.153 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:04.153 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:04.153 EAL: Ask a virtual area of 0x61000 bytes 00:06:04.153 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:04.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:04.153 EAL: Ask a virtual area of 0x400000000 bytes 00:06:04.153 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:04.153 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:04.153 EAL: Hugepages will be freed exactly as allocated. 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: TSC frequency is ~2200000 KHz 00:06:04.153 EAL: Main lcore 0 is ready (tid=7fb37e8e0a00;cpuset=[0]) 00:06:04.153 EAL: Trying to obtain current memory policy. 00:06:04.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.153 EAL: Restoring previous memory policy: 0 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was expanded by 2MB 00:06:04.153 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:04.153 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:04.153 EAL: Mem event callback 'spdk:(nil)' registered 00:06:04.153 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:04.153 00:06:04.153 00:06:04.153 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.153 http://cunit.sourceforge.net/ 00:06:04.153 00:06:04.153 00:06:04.153 Suite: components_suite 00:06:04.153 Test: vtophys_malloc_test ...passed 00:06:04.153 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:04.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.153 EAL: Restoring previous memory policy: 4 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was expanded by 4MB 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was shrunk by 4MB 00:06:04.153 EAL: Trying to obtain current memory policy. 00:06:04.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.153 EAL: Restoring previous memory policy: 4 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was expanded by 6MB 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was shrunk by 6MB 00:06:04.153 EAL: Trying to obtain current memory policy. 00:06:04.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.153 EAL: Restoring previous memory policy: 4 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was expanded by 10MB 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was shrunk by 10MB 00:06:04.153 EAL: Trying to obtain current memory policy. 00:06:04.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.153 EAL: Restoring previous memory policy: 4 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was expanded by 18MB 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was shrunk by 18MB 00:06:04.153 EAL: Trying to obtain current memory policy. 00:06:04.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.153 EAL: Restoring previous memory policy: 4 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was expanded by 34MB 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was shrunk by 34MB 00:06:04.153 EAL: Trying to obtain current memory policy. 
00:06:04.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.153 EAL: Restoring previous memory policy: 4 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was expanded by 66MB 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was shrunk by 66MB 00:06:04.153 EAL: Trying to obtain current memory policy. 00:06:04.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.153 EAL: Restoring previous memory policy: 4 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.153 EAL: request: mp_malloc_sync 00:06:04.153 EAL: No shared files mode enabled, IPC is disabled 00:06:04.153 EAL: Heap on socket 0 was expanded by 130MB 00:06:04.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.412 EAL: request: mp_malloc_sync 00:06:04.412 EAL: No shared files mode enabled, IPC is disabled 00:06:04.412 EAL: Heap on socket 0 was shrunk by 130MB 00:06:04.412 EAL: Trying to obtain current memory policy. 00:06:04.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.412 EAL: Restoring previous memory policy: 4 00:06:04.412 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.412 EAL: request: mp_malloc_sync 00:06:04.412 EAL: No shared files mode enabled, IPC is disabled 00:06:04.412 EAL: Heap on socket 0 was expanded by 258MB 00:06:04.412 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.412 EAL: request: mp_malloc_sync 00:06:04.412 EAL: No shared files mode enabled, IPC is disabled 00:06:04.412 EAL: Heap on socket 0 was shrunk by 258MB 00:06:04.412 EAL: Trying to obtain current memory policy. 00:06:04.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.672 EAL: Restoring previous memory policy: 4 00:06:04.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.672 EAL: request: mp_malloc_sync 00:06:04.672 EAL: No shared files mode enabled, IPC is disabled 00:06:04.672 EAL: Heap on socket 0 was expanded by 514MB 00:06:04.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.672 EAL: request: mp_malloc_sync 00:06:04.672 EAL: No shared files mode enabled, IPC is disabled 00:06:04.672 EAL: Heap on socket 0 was shrunk by 514MB 00:06:04.672 EAL: Trying to obtain current memory policy. 
00:06:04.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.932 EAL: Restoring previous memory policy: 4 00:06:04.932 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.932 EAL: request: mp_malloc_sync 00:06:04.932 EAL: No shared files mode enabled, IPC is disabled 00:06:04.932 EAL: Heap on socket 0 was expanded by 1026MB 00:06:05.192 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.451 passed 00:06:05.451 00:06:05.451 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.451 suites 1 1 n/a 0 0 00:06:05.451 tests 2 2 2 0 0 00:06:05.451 asserts 5484 5484 5484 0 n/a 00:06:05.451 00:06:05.451 Elapsed time = 1.223 seconds 00:06:05.451 EAL: request: mp_malloc_sync 00:06:05.451 EAL: No shared files mode enabled, IPC is disabled 00:06:05.451 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:05.451 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.451 EAL: request: mp_malloc_sync 00:06:05.451 EAL: No shared files mode enabled, IPC is disabled 00:06:05.451 EAL: Heap on socket 0 was shrunk by 2MB 00:06:05.451 EAL: No shared files mode enabled, IPC is disabled 00:06:05.451 EAL: No shared files mode enabled, IPC is disabled 00:06:05.451 EAL: No shared files mode enabled, IPC is disabled 00:06:05.451 00:06:05.451 real 0m1.440s 00:06:05.451 user 0m0.798s 00:06:05.451 sys 0m0.507s 00:06:05.451 10:00:19 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.451 ************************************ 00:06:05.451 END TEST env_vtophys 00:06:05.451 ************************************ 00:06:05.451 10:00:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:05.451 10:00:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.451 10:00:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.451 10:00:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.451 10:00:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.451 ************************************ 00:06:05.451 START TEST env_pci 00:06:05.451 ************************************ 00:06:05.451 10:00:19 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.451 00:06:05.451 00:06:05.451 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.451 http://cunit.sourceforge.net/ 00:06:05.451 00:06:05.451 00:06:05.451 Suite: pci 00:06:05.451 Test: pci_hook ...[2024-11-19 10:00:19.246017] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56578 has claimed it 00:06:05.451 passed 00:06:05.451 00:06:05.451 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.451 suites 1 1 n/a 0 0 00:06:05.451 tests 1 1 1 0 0 00:06:05.451 asserts 25 25 25 0 n/a 00:06:05.451 00:06:05.451 Elapsed time = 0.002 seconds 00:06:05.451 EAL: Cannot find device (10000:00:01.0) 00:06:05.451 EAL: Failed to attach device on primary process 00:06:05.451 00:06:05.451 real 0m0.019s 00:06:05.451 user 0m0.011s 00:06:05.451 sys 0m0.008s 00:06:05.451 10:00:19 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.451 10:00:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:05.451 ************************************ 00:06:05.451 END TEST env_pci 00:06:05.451 ************************************ 00:06:05.451 10:00:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:05.451 10:00:19 env -- env/env.sh@15 -- # uname 00:06:05.451 10:00:19 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:05.451 10:00:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:05.451 10:00:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.451 10:00:19 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:05.451 10:00:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.451 10:00:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.451 ************************************ 00:06:05.451 START TEST env_dpdk_post_init 00:06:05.451 ************************************ 00:06:05.451 10:00:19 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.711 EAL: Detected CPU lcores: 10 00:06:05.711 EAL: Detected NUMA nodes: 1 00:06:05.711 EAL: Detected shared linkage of DPDK 00:06:05.711 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.711 EAL: Selected IOVA mode 'PA' 00:06:05.711 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:05.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:05.711 Starting DPDK initialization... 00:06:05.711 Starting SPDK post initialization... 00:06:05.711 SPDK NVMe probe 00:06:05.711 Attaching to 0000:00:10.0 00:06:05.711 Attaching to 0000:00:11.0 00:06:05.711 Attached to 0000:00:10.0 00:06:05.711 Attached to 0000:00:11.0 00:06:05.711 Cleaning up... 00:06:05.711 00:06:05.711 real 0m0.181s 00:06:05.711 user 0m0.049s 00:06:05.711 sys 0m0.032s 00:06:05.711 10:00:19 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.711 ************************************ 00:06:05.711 END TEST env_dpdk_post_init 00:06:05.711 ************************************ 00:06:05.711 10:00:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.711 10:00:19 env -- env/env.sh@26 -- # uname 00:06:05.711 10:00:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:05.711 10:00:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.711 10:00:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.711 10:00:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.711 10:00:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.711 ************************************ 00:06:05.711 START TEST env_mem_callbacks 00:06:05.711 ************************************ 00:06:05.711 10:00:19 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.711 EAL: Detected CPU lcores: 10 00:06:05.711 EAL: Detected NUMA nodes: 1 00:06:05.711 EAL: Detected shared linkage of DPDK 00:06:05.711 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.711 EAL: Selected IOVA mode 'PA' 00:06:05.970 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.970 00:06:05.970 00:06:05.970 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.970 http://cunit.sourceforge.net/ 00:06:05.970 00:06:05.970 00:06:05.970 Suite: memory 00:06:05.970 Test: test ... 
00:06:05.970 register 0x200000200000 2097152 00:06:05.970 malloc 3145728 00:06:05.970 register 0x200000400000 4194304 00:06:05.970 buf 0x200000500000 len 3145728 PASSED 00:06:05.970 malloc 64 00:06:05.970 buf 0x2000004fff40 len 64 PASSED 00:06:05.970 malloc 4194304 00:06:05.970 register 0x200000800000 6291456 00:06:05.970 buf 0x200000a00000 len 4194304 PASSED 00:06:05.970 free 0x200000500000 3145728 00:06:05.970 free 0x2000004fff40 64 00:06:05.970 unregister 0x200000400000 4194304 PASSED 00:06:05.970 free 0x200000a00000 4194304 00:06:05.970 unregister 0x200000800000 6291456 PASSED 00:06:05.970 malloc 8388608 00:06:05.970 register 0x200000400000 10485760 00:06:05.970 buf 0x200000600000 len 8388608 PASSED 00:06:05.970 free 0x200000600000 8388608 00:06:05.970 unregister 0x200000400000 10485760 PASSED 00:06:05.970 passed 00:06:05.970 00:06:05.970 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.970 suites 1 1 n/a 0 0 00:06:05.970 tests 1 1 1 0 0 00:06:05.970 asserts 15 15 15 0 n/a 00:06:05.970 00:06:05.970 Elapsed time = 0.009 seconds 00:06:05.970 00:06:05.970 real 0m0.143s 00:06:05.970 user 0m0.015s 00:06:05.970 sys 0m0.027s 00:06:05.970 10:00:19 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.970 ************************************ 00:06:05.970 END TEST env_mem_callbacks 00:06:05.970 ************************************ 00:06:05.970 10:00:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:05.970 00:06:05.970 real 0m2.494s 00:06:05.970 user 0m1.315s 00:06:05.970 sys 0m0.820s 00:06:05.970 10:00:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.970 10:00:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.970 ************************************ 00:06:05.971 END TEST env 00:06:05.971 ************************************ 00:06:05.971 10:00:19 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:05.971 10:00:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.971 10:00:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.971 10:00:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.971 ************************************ 00:06:05.971 START TEST rpc 00:06:05.971 ************************************ 00:06:05.971 10:00:19 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:05.971 * Looking for test storage... 
00:06:06.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.230 10:00:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.230 10:00:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.230 10:00:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.230 10:00:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.230 10:00:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.230 10:00:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.230 10:00:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.230 10:00:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.230 10:00:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.230 10:00:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.230 10:00:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.230 10:00:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:06.230 10:00:19 rpc -- scripts/common.sh@345 -- # : 1 00:06:06.230 10:00:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.230 10:00:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.230 10:00:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:06.230 10:00:19 rpc -- scripts/common.sh@353 -- # local d=1 00:06:06.230 10:00:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.230 10:00:19 rpc -- scripts/common.sh@355 -- # echo 1 00:06:06.230 10:00:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.230 10:00:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:06.230 10:00:19 rpc -- scripts/common.sh@353 -- # local d=2 00:06:06.230 10:00:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.230 10:00:19 rpc -- scripts/common.sh@355 -- # echo 2 00:06:06.230 10:00:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.230 10:00:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.230 10:00:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.230 10:00:19 rpc -- scripts/common.sh@368 -- # return 0 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.230 --rc genhtml_branch_coverage=1 00:06:06.230 --rc genhtml_function_coverage=1 00:06:06.230 --rc genhtml_legend=1 00:06:06.230 --rc geninfo_all_blocks=1 00:06:06.230 --rc geninfo_unexecuted_blocks=1 00:06:06.230 00:06:06.230 ' 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.230 --rc genhtml_branch_coverage=1 00:06:06.230 --rc genhtml_function_coverage=1 00:06:06.230 --rc genhtml_legend=1 00:06:06.230 --rc geninfo_all_blocks=1 00:06:06.230 --rc geninfo_unexecuted_blocks=1 00:06:06.230 00:06:06.230 ' 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.230 --rc genhtml_branch_coverage=1 00:06:06.230 --rc genhtml_function_coverage=1 00:06:06.230 --rc 
genhtml_legend=1 00:06:06.230 --rc geninfo_all_blocks=1 00:06:06.230 --rc geninfo_unexecuted_blocks=1 00:06:06.230 00:06:06.230 ' 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.230 --rc genhtml_branch_coverage=1 00:06:06.230 --rc genhtml_function_coverage=1 00:06:06.230 --rc genhtml_legend=1 00:06:06.230 --rc geninfo_all_blocks=1 00:06:06.230 --rc geninfo_unexecuted_blocks=1 00:06:06.230 00:06:06.230 ' 00:06:06.230 10:00:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56695 00:06:06.230 10:00:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:06.230 10:00:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.230 10:00:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56695 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 56695 ']' 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.230 10:00:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.230 [2024-11-19 10:00:20.044583] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:06.231 [2024-11-19 10:00:20.045255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56695 ] 00:06:06.490 [2024-11-19 10:00:20.197497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.490 [2024-11-19 10:00:20.248826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:06.490 [2024-11-19 10:00:20.248901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56695' to capture a snapshot of events at runtime. 00:06:06.490 [2024-11-19 10:00:20.248943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:06.490 [2024-11-19 10:00:20.248956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:06.490 [2024-11-19 10:00:20.248967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56695 for offline analysis/debug. 
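The app_setup_trace notices above print the exact commands for pulling a trace snapshot out of this spdk_tgt instance. A minimal sketch of doing that by hand, assuming the same build tree as this job and the pid 56695 shown above (both are specific to this run):

cd /home/vagrant/spdk_repo/spdk
# capture tracepoint events live while the target is still running,
# using the shm name and pid printed in the notices above
build/bin/spdk_trace -s spdk_tgt -p 56695 > tgt_trace.txt
# or keep the shared-memory trace file for offline analysis after the target exits
cp /dev/shm/spdk_tgt_trace.pid56695 ./spdk_tgt_trace.bin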
00:06:06.490 [2024-11-19 10:00:20.249479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.490 [2024-11-19 10:00:20.322730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.749 10:00:20 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.749 10:00:20 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.749 10:00:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.749 10:00:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.749 10:00:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:06.749 10:00:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:06.749 10:00:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.749 10:00:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.749 10:00:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.749 ************************************ 00:06:06.749 START TEST rpc_integrity 00:06:06.749 ************************************ 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:06.749 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.749 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.749 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.749 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.749 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.749 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:06.749 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.749 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.749 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.749 { 00:06:06.749 "name": "Malloc0", 00:06:06.749 "aliases": [ 00:06:06.749 "7e655bb3-31a3-4f5f-8514-019e54c20d2c" 00:06:06.749 ], 00:06:06.749 "product_name": "Malloc disk", 00:06:06.749 "block_size": 512, 00:06:06.749 "num_blocks": 16384, 00:06:06.749 "uuid": "7e655bb3-31a3-4f5f-8514-019e54c20d2c", 00:06:06.749 "assigned_rate_limits": { 00:06:06.749 "rw_ios_per_sec": 0, 00:06:06.749 "rw_mbytes_per_sec": 0, 00:06:06.749 "r_mbytes_per_sec": 0, 00:06:06.749 "w_mbytes_per_sec": 0 00:06:06.750 }, 00:06:06.750 "claimed": false, 00:06:06.750 "zoned": false, 00:06:06.750 
"supported_io_types": { 00:06:06.750 "read": true, 00:06:06.750 "write": true, 00:06:06.750 "unmap": true, 00:06:06.750 "flush": true, 00:06:06.750 "reset": true, 00:06:06.750 "nvme_admin": false, 00:06:06.750 "nvme_io": false, 00:06:06.750 "nvme_io_md": false, 00:06:06.750 "write_zeroes": true, 00:06:06.750 "zcopy": true, 00:06:06.750 "get_zone_info": false, 00:06:06.750 "zone_management": false, 00:06:06.750 "zone_append": false, 00:06:06.750 "compare": false, 00:06:06.750 "compare_and_write": false, 00:06:06.750 "abort": true, 00:06:06.750 "seek_hole": false, 00:06:06.750 "seek_data": false, 00:06:06.750 "copy": true, 00:06:06.750 "nvme_iov_md": false 00:06:06.750 }, 00:06:06.750 "memory_domains": [ 00:06:06.750 { 00:06:06.750 "dma_device_id": "system", 00:06:06.750 "dma_device_type": 1 00:06:06.750 }, 00:06:06.750 { 00:06:06.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.750 "dma_device_type": 2 00:06:06.750 } 00:06:06.750 ], 00:06:06.750 "driver_specific": {} 00:06:06.750 } 00:06:06.750 ]' 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.010 [2024-11-19 10:00:20.694477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:07.010 [2024-11-19 10:00:20.694540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.010 [2024-11-19 10:00:20.694670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa4f20 00:06:07.010 [2024-11-19 10:00:20.694700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.010 [2024-11-19 10:00:20.696442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.010 [2024-11-19 10:00:20.696479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:07.010 Passthru0 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.010 { 00:06:07.010 "name": "Malloc0", 00:06:07.010 "aliases": [ 00:06:07.010 "7e655bb3-31a3-4f5f-8514-019e54c20d2c" 00:06:07.010 ], 00:06:07.010 "product_name": "Malloc disk", 00:06:07.010 "block_size": 512, 00:06:07.010 "num_blocks": 16384, 00:06:07.010 "uuid": "7e655bb3-31a3-4f5f-8514-019e54c20d2c", 00:06:07.010 "assigned_rate_limits": { 00:06:07.010 "rw_ios_per_sec": 0, 00:06:07.010 "rw_mbytes_per_sec": 0, 00:06:07.010 "r_mbytes_per_sec": 0, 00:06:07.010 "w_mbytes_per_sec": 0 00:06:07.010 }, 00:06:07.010 "claimed": true, 00:06:07.010 "claim_type": "exclusive_write", 00:06:07.010 "zoned": false, 00:06:07.010 "supported_io_types": { 00:06:07.010 "read": true, 00:06:07.010 "write": true, 00:06:07.010 "unmap": true, 00:06:07.010 "flush": true, 00:06:07.010 "reset": true, 00:06:07.010 "nvme_admin": false, 
00:06:07.010 "nvme_io": false, 00:06:07.010 "nvme_io_md": false, 00:06:07.010 "write_zeroes": true, 00:06:07.010 "zcopy": true, 00:06:07.010 "get_zone_info": false, 00:06:07.010 "zone_management": false, 00:06:07.010 "zone_append": false, 00:06:07.010 "compare": false, 00:06:07.010 "compare_and_write": false, 00:06:07.010 "abort": true, 00:06:07.010 "seek_hole": false, 00:06:07.010 "seek_data": false, 00:06:07.010 "copy": true, 00:06:07.010 "nvme_iov_md": false 00:06:07.010 }, 00:06:07.010 "memory_domains": [ 00:06:07.010 { 00:06:07.010 "dma_device_id": "system", 00:06:07.010 "dma_device_type": 1 00:06:07.010 }, 00:06:07.010 { 00:06:07.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.010 "dma_device_type": 2 00:06:07.010 } 00:06:07.010 ], 00:06:07.010 "driver_specific": {} 00:06:07.010 }, 00:06:07.010 { 00:06:07.010 "name": "Passthru0", 00:06:07.010 "aliases": [ 00:06:07.010 "c629558c-6af0-56c6-aa4d-9e2e96a85ac8" 00:06:07.010 ], 00:06:07.010 "product_name": "passthru", 00:06:07.010 "block_size": 512, 00:06:07.010 "num_blocks": 16384, 00:06:07.010 "uuid": "c629558c-6af0-56c6-aa4d-9e2e96a85ac8", 00:06:07.010 "assigned_rate_limits": { 00:06:07.010 "rw_ios_per_sec": 0, 00:06:07.010 "rw_mbytes_per_sec": 0, 00:06:07.010 "r_mbytes_per_sec": 0, 00:06:07.010 "w_mbytes_per_sec": 0 00:06:07.010 }, 00:06:07.010 "claimed": false, 00:06:07.010 "zoned": false, 00:06:07.010 "supported_io_types": { 00:06:07.010 "read": true, 00:06:07.010 "write": true, 00:06:07.010 "unmap": true, 00:06:07.010 "flush": true, 00:06:07.010 "reset": true, 00:06:07.010 "nvme_admin": false, 00:06:07.010 "nvme_io": false, 00:06:07.010 "nvme_io_md": false, 00:06:07.010 "write_zeroes": true, 00:06:07.010 "zcopy": true, 00:06:07.010 "get_zone_info": false, 00:06:07.010 "zone_management": false, 00:06:07.010 "zone_append": false, 00:06:07.010 "compare": false, 00:06:07.010 "compare_and_write": false, 00:06:07.010 "abort": true, 00:06:07.010 "seek_hole": false, 00:06:07.010 "seek_data": false, 00:06:07.010 "copy": true, 00:06:07.010 "nvme_iov_md": false 00:06:07.010 }, 00:06:07.010 "memory_domains": [ 00:06:07.010 { 00:06:07.010 "dma_device_id": "system", 00:06:07.010 "dma_device_type": 1 00:06:07.010 }, 00:06:07.010 { 00:06:07.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.010 "dma_device_type": 2 00:06:07.010 } 00:06:07.010 ], 00:06:07.010 "driver_specific": { 00:06:07.010 "passthru": { 00:06:07.010 "name": "Passthru0", 00:06:07.010 "base_bdev_name": "Malloc0" 00:06:07.010 } 00:06:07.010 } 00:06:07.010 } 00:06:07.010 ]' 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.010 10:00:20 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:07.010 10:00:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.010 00:06:07.010 real 0m0.325s 00:06:07.010 user 0m0.219s 00:06:07.010 sys 0m0.039s 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.010 ************************************ 00:06:07.010 END TEST rpc_integrity 00:06:07.010 10:00:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.010 ************************************ 00:06:07.270 10:00:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:07.270 10:00:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.270 10:00:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.270 10:00:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.270 ************************************ 00:06:07.270 START TEST rpc_plugins 00:06:07.270 ************************************ 00:06:07.270 10:00:20 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:07.270 10:00:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:07.270 10:00:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.270 10:00:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.270 10:00:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.270 10:00:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:07.270 10:00:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:07.270 10:00:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.270 10:00:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.270 10:00:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.270 10:00:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:07.270 { 00:06:07.270 "name": "Malloc1", 00:06:07.270 "aliases": [ 00:06:07.270 "64fd5056-ef00-4e42-aa77-0ff918042251" 00:06:07.270 ], 00:06:07.270 "product_name": "Malloc disk", 00:06:07.270 "block_size": 4096, 00:06:07.270 "num_blocks": 256, 00:06:07.270 "uuid": "64fd5056-ef00-4e42-aa77-0ff918042251", 00:06:07.270 "assigned_rate_limits": { 00:06:07.270 "rw_ios_per_sec": 0, 00:06:07.270 "rw_mbytes_per_sec": 0, 00:06:07.270 "r_mbytes_per_sec": 0, 00:06:07.270 "w_mbytes_per_sec": 0 00:06:07.270 }, 00:06:07.270 "claimed": false, 00:06:07.270 "zoned": false, 00:06:07.270 "supported_io_types": { 00:06:07.270 "read": true, 00:06:07.270 "write": true, 00:06:07.270 "unmap": true, 00:06:07.270 "flush": true, 00:06:07.270 "reset": true, 00:06:07.270 "nvme_admin": false, 00:06:07.270 "nvme_io": false, 00:06:07.270 "nvme_io_md": false, 00:06:07.270 "write_zeroes": true, 00:06:07.270 "zcopy": true, 00:06:07.270 "get_zone_info": false, 00:06:07.270 "zone_management": false, 00:06:07.270 "zone_append": false, 00:06:07.270 "compare": false, 00:06:07.270 "compare_and_write": false, 00:06:07.270 "abort": true, 00:06:07.270 "seek_hole": false, 00:06:07.270 "seek_data": false, 00:06:07.270 "copy": true, 00:06:07.270 "nvme_iov_md": false 00:06:07.270 }, 00:06:07.270 "memory_domains": [ 00:06:07.270 { 
00:06:07.270 "dma_device_id": "system", 00:06:07.270 "dma_device_type": 1 00:06:07.270 }, 00:06:07.270 { 00:06:07.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.270 "dma_device_type": 2 00:06:07.270 } 00:06:07.270 ], 00:06:07.270 "driver_specific": {} 00:06:07.270 } 00:06:07.270 ]' 00:06:07.270 10:00:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:07.270 10:00:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:07.270 10:00:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:07.270 10:00:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.270 10:00:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.270 10:00:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.270 10:00:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:07.270 10:00:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.270 10:00:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.270 10:00:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.270 10:00:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:07.270 10:00:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:07.270 10:00:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:07.270 00:06:07.270 real 0m0.159s 00:06:07.270 user 0m0.105s 00:06:07.270 sys 0m0.019s 00:06:07.270 10:00:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.270 ************************************ 00:06:07.270 END TEST rpc_plugins 00:06:07.270 ************************************ 00:06:07.270 10:00:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.270 10:00:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:07.270 10:00:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.270 10:00:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.270 10:00:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.270 ************************************ 00:06:07.270 START TEST rpc_trace_cmd_test 00:06:07.270 ************************************ 00:06:07.271 10:00:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:07.271 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:07.271 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:07.271 10:00:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.271 10:00:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.271 10:00:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.271 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:07.271 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56695", 00:06:07.271 "tpoint_group_mask": "0x8", 00:06:07.271 "iscsi_conn": { 00:06:07.271 "mask": "0x2", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "scsi": { 00:06:07.271 "mask": "0x4", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "bdev": { 00:06:07.271 "mask": "0x8", 00:06:07.271 "tpoint_mask": "0xffffffffffffffff" 00:06:07.271 }, 00:06:07.271 "nvmf_rdma": { 00:06:07.271 "mask": "0x10", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "nvmf_tcp": { 00:06:07.271 "mask": "0x20", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "ftl": { 00:06:07.271 
"mask": "0x40", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "blobfs": { 00:06:07.271 "mask": "0x80", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "dsa": { 00:06:07.271 "mask": "0x200", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "thread": { 00:06:07.271 "mask": "0x400", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "nvme_pcie": { 00:06:07.271 "mask": "0x800", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "iaa": { 00:06:07.271 "mask": "0x1000", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "nvme_tcp": { 00:06:07.271 "mask": "0x2000", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "bdev_nvme": { 00:06:07.271 "mask": "0x4000", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "sock": { 00:06:07.271 "mask": "0x8000", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "blob": { 00:06:07.271 "mask": "0x10000", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "bdev_raid": { 00:06:07.271 "mask": "0x20000", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 }, 00:06:07.271 "scheduler": { 00:06:07.271 "mask": "0x40000", 00:06:07.271 "tpoint_mask": "0x0" 00:06:07.271 } 00:06:07.271 }' 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:07.530 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:07.790 10:00:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:07.790 00:06:07.790 real 0m0.294s 00:06:07.790 user 0m0.264s 00:06:07.790 sys 0m0.020s 00:06:07.790 10:00:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.790 ************************************ 00:06:07.790 END TEST rpc_trace_cmd_test 00:06:07.790 10:00:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 ************************************ 00:06:07.790 10:00:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:07.790 10:00:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:07.790 10:00:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:07.790 10:00:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.790 10:00:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.790 10:00:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 ************************************ 00:06:07.790 START TEST rpc_daemon_integrity 00:06:07.790 ************************************ 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 
10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.790 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:07.790 { 00:06:07.790 "name": "Malloc2", 00:06:07.790 "aliases": [ 00:06:07.790 "3f6dcf6a-9e35-48f1-a26d-04447320ad8d" 00:06:07.790 ], 00:06:07.791 "product_name": "Malloc disk", 00:06:07.791 "block_size": 512, 00:06:07.791 "num_blocks": 16384, 00:06:07.791 "uuid": "3f6dcf6a-9e35-48f1-a26d-04447320ad8d", 00:06:07.791 "assigned_rate_limits": { 00:06:07.791 "rw_ios_per_sec": 0, 00:06:07.791 "rw_mbytes_per_sec": 0, 00:06:07.791 "r_mbytes_per_sec": 0, 00:06:07.791 "w_mbytes_per_sec": 0 00:06:07.791 }, 00:06:07.791 "claimed": false, 00:06:07.791 "zoned": false, 00:06:07.791 "supported_io_types": { 00:06:07.791 "read": true, 00:06:07.791 "write": true, 00:06:07.791 "unmap": true, 00:06:07.791 "flush": true, 00:06:07.791 "reset": true, 00:06:07.791 "nvme_admin": false, 00:06:07.791 "nvme_io": false, 00:06:07.791 "nvme_io_md": false, 00:06:07.791 "write_zeroes": true, 00:06:07.791 "zcopy": true, 00:06:07.791 "get_zone_info": false, 00:06:07.791 "zone_management": false, 00:06:07.791 "zone_append": false, 00:06:07.791 "compare": false, 00:06:07.791 "compare_and_write": false, 00:06:07.791 "abort": true, 00:06:07.791 "seek_hole": false, 00:06:07.791 "seek_data": false, 00:06:07.791 "copy": true, 00:06:07.791 "nvme_iov_md": false 00:06:07.791 }, 00:06:07.791 "memory_domains": [ 00:06:07.791 { 00:06:07.791 "dma_device_id": "system", 00:06:07.791 "dma_device_type": 1 00:06:07.791 }, 00:06:07.791 { 00:06:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.791 "dma_device_type": 2 00:06:07.791 } 00:06:07.791 ], 00:06:07.791 "driver_specific": {} 00:06:07.791 } 00:06:07.791 ]' 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.791 [2024-11-19 10:00:21.638779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:07.791 [2024-11-19 10:00:21.638844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:07.791 [2024-11-19 10:00:21.638862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb8b8e0 00:06:07.791 [2024-11-19 10:00:21.638871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.791 [2024-11-19 10:00:21.640251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.791 [2024-11-19 10:00:21.640332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:07.791 Passthru0 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.791 { 00:06:07.791 "name": "Malloc2", 00:06:07.791 "aliases": [ 00:06:07.791 "3f6dcf6a-9e35-48f1-a26d-04447320ad8d" 00:06:07.791 ], 00:06:07.791 "product_name": "Malloc disk", 00:06:07.791 "block_size": 512, 00:06:07.791 "num_blocks": 16384, 00:06:07.791 "uuid": "3f6dcf6a-9e35-48f1-a26d-04447320ad8d", 00:06:07.791 "assigned_rate_limits": { 00:06:07.791 "rw_ios_per_sec": 0, 00:06:07.791 "rw_mbytes_per_sec": 0, 00:06:07.791 "r_mbytes_per_sec": 0, 00:06:07.791 "w_mbytes_per_sec": 0 00:06:07.791 }, 00:06:07.791 "claimed": true, 00:06:07.791 "claim_type": "exclusive_write", 00:06:07.791 "zoned": false, 00:06:07.791 "supported_io_types": { 00:06:07.791 "read": true, 00:06:07.791 "write": true, 00:06:07.791 "unmap": true, 00:06:07.791 "flush": true, 00:06:07.791 "reset": true, 00:06:07.791 "nvme_admin": false, 00:06:07.791 "nvme_io": false, 00:06:07.791 "nvme_io_md": false, 00:06:07.791 "write_zeroes": true, 00:06:07.791 "zcopy": true, 00:06:07.791 "get_zone_info": false, 00:06:07.791 "zone_management": false, 00:06:07.791 "zone_append": false, 00:06:07.791 "compare": false, 00:06:07.791 "compare_and_write": false, 00:06:07.791 "abort": true, 00:06:07.791 "seek_hole": false, 00:06:07.791 "seek_data": false, 00:06:07.791 "copy": true, 00:06:07.791 "nvme_iov_md": false 00:06:07.791 }, 00:06:07.791 "memory_domains": [ 00:06:07.791 { 00:06:07.791 "dma_device_id": "system", 00:06:07.791 "dma_device_type": 1 00:06:07.791 }, 00:06:07.791 { 00:06:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.791 "dma_device_type": 2 00:06:07.791 } 00:06:07.791 ], 00:06:07.791 "driver_specific": {} 00:06:07.791 }, 00:06:07.791 { 00:06:07.791 "name": "Passthru0", 00:06:07.791 "aliases": [ 00:06:07.791 "0e0c01c6-fad5-57f4-a2b3-d747a550f4d5" 00:06:07.791 ], 00:06:07.791 "product_name": "passthru", 00:06:07.791 "block_size": 512, 00:06:07.791 "num_blocks": 16384, 00:06:07.791 "uuid": "0e0c01c6-fad5-57f4-a2b3-d747a550f4d5", 00:06:07.791 "assigned_rate_limits": { 00:06:07.791 "rw_ios_per_sec": 0, 00:06:07.791 "rw_mbytes_per_sec": 0, 00:06:07.791 "r_mbytes_per_sec": 0, 00:06:07.791 "w_mbytes_per_sec": 0 00:06:07.791 }, 00:06:07.791 "claimed": false, 00:06:07.791 "zoned": false, 00:06:07.791 "supported_io_types": { 00:06:07.791 "read": true, 00:06:07.791 "write": true, 00:06:07.791 "unmap": true, 00:06:07.791 "flush": true, 00:06:07.791 "reset": true, 00:06:07.791 "nvme_admin": false, 00:06:07.791 "nvme_io": false, 00:06:07.791 "nvme_io_md": 
false, 00:06:07.791 "write_zeroes": true, 00:06:07.791 "zcopy": true, 00:06:07.791 "get_zone_info": false, 00:06:07.791 "zone_management": false, 00:06:07.791 "zone_append": false, 00:06:07.791 "compare": false, 00:06:07.791 "compare_and_write": false, 00:06:07.791 "abort": true, 00:06:07.791 "seek_hole": false, 00:06:07.791 "seek_data": false, 00:06:07.791 "copy": true, 00:06:07.791 "nvme_iov_md": false 00:06:07.791 }, 00:06:07.791 "memory_domains": [ 00:06:07.791 { 00:06:07.791 "dma_device_id": "system", 00:06:07.791 "dma_device_type": 1 00:06:07.791 }, 00:06:07.791 { 00:06:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.791 "dma_device_type": 2 00:06:07.791 } 00:06:07.791 ], 00:06:07.791 "driver_specific": { 00:06:07.791 "passthru": { 00:06:07.791 "name": "Passthru0", 00:06:07.791 "base_bdev_name": "Malloc2" 00:06:07.791 } 00:06:07.791 } 00:06:07.791 } 00:06:07.791 ]' 00:06:07.791 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.051 00:06:08.051 real 0m0.316s 00:06:08.051 user 0m0.213s 00:06:08.051 sys 0m0.041s 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.051 ************************************ 00:06:08.051 10:00:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.051 END TEST rpc_daemon_integrity 00:06:08.051 ************************************ 00:06:08.051 10:00:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:08.051 10:00:21 rpc -- rpc/rpc.sh@84 -- # killprocess 56695 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@954 -- # '[' -z 56695 ']' 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@958 -- # kill -0 56695 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@959 -- # uname 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56695 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.051 
killing process with pid 56695 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56695' 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@973 -- # kill 56695 00:06:08.051 10:00:21 rpc -- common/autotest_common.sh@978 -- # wait 56695 00:06:08.619 00:06:08.619 real 0m2.465s 00:06:08.619 user 0m3.120s 00:06:08.619 sys 0m0.687s 00:06:08.619 10:00:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.619 10:00:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.619 ************************************ 00:06:08.619 END TEST rpc 00:06:08.619 ************************************ 00:06:08.619 10:00:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:08.619 10:00:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.619 10:00:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.619 10:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.619 ************************************ 00:06:08.619 START TEST skip_rpc 00:06:08.619 ************************************ 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:08.619 * Looking for test storage... 00:06:08.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.619 10:00:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.619 --rc genhtml_branch_coverage=1 00:06:08.619 --rc genhtml_function_coverage=1 00:06:08.619 --rc genhtml_legend=1 00:06:08.619 --rc geninfo_all_blocks=1 00:06:08.619 --rc geninfo_unexecuted_blocks=1 00:06:08.619 00:06:08.619 ' 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.619 --rc genhtml_branch_coverage=1 00:06:08.619 --rc genhtml_function_coverage=1 00:06:08.619 --rc genhtml_legend=1 00:06:08.619 --rc geninfo_all_blocks=1 00:06:08.619 --rc geninfo_unexecuted_blocks=1 00:06:08.619 00:06:08.619 ' 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.619 --rc genhtml_branch_coverage=1 00:06:08.619 --rc genhtml_function_coverage=1 00:06:08.619 --rc genhtml_legend=1 00:06:08.619 --rc geninfo_all_blocks=1 00:06:08.619 --rc geninfo_unexecuted_blocks=1 00:06:08.619 00:06:08.619 ' 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.619 --rc genhtml_branch_coverage=1 00:06:08.619 --rc genhtml_function_coverage=1 00:06:08.619 --rc genhtml_legend=1 00:06:08.619 --rc geninfo_all_blocks=1 00:06:08.619 --rc geninfo_unexecuted_blocks=1 00:06:08.619 00:06:08.619 ' 00:06:08.619 10:00:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:08.619 10:00:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.619 10:00:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.619 10:00:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.619 ************************************ 00:06:08.619 START TEST skip_rpc 00:06:08.619 ************************************ 00:06:08.619 10:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:08.619 10:00:22 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56894 00:06:08.619 10:00:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.619 10:00:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:08.619 10:00:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:08.879 [2024-11-19 10:00:22.548278] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:08.879 [2024-11-19 10:00:22.548382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56894 ] 00:06:08.879 [2024-11-19 10:00:22.695873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.879 [2024-11-19 10:00:22.737033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.137 [2024-11-19 10:00:22.801986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56894 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56894 ']' 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56894 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56894 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.406 killing process with pid 56894 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56894' 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56894 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56894 00:06:14.406 00:06:14.406 real 0m5.407s 00:06:14.406 user 0m5.036s 00:06:14.406 sys 0m0.289s 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.406 10:00:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.406 ************************************ 00:06:14.406 END TEST skip_rpc 00:06:14.406 ************************************ 00:06:14.406 10:00:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:14.406 10:00:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.406 10:00:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.406 10:00:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.406 ************************************ 00:06:14.406 START TEST skip_rpc_with_json 00:06:14.406 ************************************ 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56980 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56980 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56980 ']' 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.406 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.406 [2024-11-19 10:00:28.010466] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
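The skip_rpc_with_json test starting above exercises a JSON config round-trip: configure the target over RPC, save the configuration, then relaunch a target straight from that file. A minimal sketch of the same flow, assuming the repo checkout at /home/vagrant/spdk_repo/spdk, an already running spdk_tgt, and the default /var/tmp/spdk.sock RPC socket:

cd /home/vagrant/spdk_repo/spdk
# create the TCP transport that nvmf_get_transports initially reports as missing
scripts/rpc.py nvmf_create_transport -t tcp
# dump the full runtime configuration to the path this test uses as CONFIG_PATH
scripts/rpc.py save_config > test/rpc/config.json
# a later spdk_tgt can then boot directly from that file, as the test does below
build/bin/spdk_tgt --json test/rpc/config.json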
00:06:14.406 [2024-11-19 10:00:28.010804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56980 ] 00:06:14.406 [2024-11-19 10:00:28.162771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.406 [2024-11-19 10:00:28.219571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.665 [2024-11-19 10:00:28.295598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.665 [2024-11-19 10:00:28.489818] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:14.665 request: 00:06:14.665 { 00:06:14.665 "trtype": "tcp", 00:06:14.665 "method": "nvmf_get_transports", 00:06:14.665 "req_id": 1 00:06:14.665 } 00:06:14.665 Got JSON-RPC error response 00:06:14.665 response: 00:06:14.665 { 00:06:14.665 "code": -19, 00:06:14.665 "message": "No such device" 00:06:14.665 } 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.665 [2024-11-19 10:00:28.501899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.665 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.926 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.926 10:00:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.926 { 00:06:14.926 "subsystems": [ 00:06:14.926 { 00:06:14.926 "subsystem": "fsdev", 00:06:14.926 "config": [ 00:06:14.926 { 00:06:14.926 "method": "fsdev_set_opts", 00:06:14.926 "params": { 00:06:14.926 "fsdev_io_pool_size": 65535, 00:06:14.926 "fsdev_io_cache_size": 256 00:06:14.926 } 00:06:14.926 } 00:06:14.926 ] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "keyring", 00:06:14.926 "config": [] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "iobuf", 00:06:14.926 "config": [ 00:06:14.926 { 00:06:14.926 "method": "iobuf_set_options", 00:06:14.926 "params": { 00:06:14.926 "small_pool_count": 8192, 00:06:14.926 "large_pool_count": 1024, 00:06:14.926 "small_bufsize": 8192, 00:06:14.926 "large_bufsize": 135168, 00:06:14.926 "enable_numa": false 00:06:14.926 } 
00:06:14.926 } 00:06:14.926 ] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "sock", 00:06:14.926 "config": [ 00:06:14.926 { 00:06:14.926 "method": "sock_set_default_impl", 00:06:14.926 "params": { 00:06:14.926 "impl_name": "uring" 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "sock_impl_set_options", 00:06:14.926 "params": { 00:06:14.926 "impl_name": "ssl", 00:06:14.926 "recv_buf_size": 4096, 00:06:14.926 "send_buf_size": 4096, 00:06:14.926 "enable_recv_pipe": true, 00:06:14.926 "enable_quickack": false, 00:06:14.926 "enable_placement_id": 0, 00:06:14.926 "enable_zerocopy_send_server": true, 00:06:14.926 "enable_zerocopy_send_client": false, 00:06:14.926 "zerocopy_threshold": 0, 00:06:14.926 "tls_version": 0, 00:06:14.926 "enable_ktls": false 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "sock_impl_set_options", 00:06:14.926 "params": { 00:06:14.926 "impl_name": "posix", 00:06:14.926 "recv_buf_size": 2097152, 00:06:14.926 "send_buf_size": 2097152, 00:06:14.926 "enable_recv_pipe": true, 00:06:14.926 "enable_quickack": false, 00:06:14.926 "enable_placement_id": 0, 00:06:14.926 "enable_zerocopy_send_server": true, 00:06:14.926 "enable_zerocopy_send_client": false, 00:06:14.926 "zerocopy_threshold": 0, 00:06:14.926 "tls_version": 0, 00:06:14.926 "enable_ktls": false 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "sock_impl_set_options", 00:06:14.926 "params": { 00:06:14.926 "impl_name": "uring", 00:06:14.926 "recv_buf_size": 2097152, 00:06:14.926 "send_buf_size": 2097152, 00:06:14.926 "enable_recv_pipe": true, 00:06:14.926 "enable_quickack": false, 00:06:14.926 "enable_placement_id": 0, 00:06:14.926 "enable_zerocopy_send_server": false, 00:06:14.926 "enable_zerocopy_send_client": false, 00:06:14.926 "zerocopy_threshold": 0, 00:06:14.926 "tls_version": 0, 00:06:14.926 "enable_ktls": false 00:06:14.926 } 00:06:14.926 } 00:06:14.926 ] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "vmd", 00:06:14.926 "config": [] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "accel", 00:06:14.926 "config": [ 00:06:14.926 { 00:06:14.926 "method": "accel_set_options", 00:06:14.926 "params": { 00:06:14.926 "small_cache_size": 128, 00:06:14.926 "large_cache_size": 16, 00:06:14.926 "task_count": 2048, 00:06:14.926 "sequence_count": 2048, 00:06:14.926 "buf_count": 2048 00:06:14.926 } 00:06:14.926 } 00:06:14.926 ] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "bdev", 00:06:14.926 "config": [ 00:06:14.926 { 00:06:14.926 "method": "bdev_set_options", 00:06:14.926 "params": { 00:06:14.926 "bdev_io_pool_size": 65535, 00:06:14.926 "bdev_io_cache_size": 256, 00:06:14.926 "bdev_auto_examine": true, 00:06:14.926 "iobuf_small_cache_size": 128, 00:06:14.926 "iobuf_large_cache_size": 16 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "bdev_raid_set_options", 00:06:14.926 "params": { 00:06:14.926 "process_window_size_kb": 1024, 00:06:14.926 "process_max_bandwidth_mb_sec": 0 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "bdev_iscsi_set_options", 00:06:14.926 "params": { 00:06:14.926 "timeout_sec": 30 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "bdev_nvme_set_options", 00:06:14.926 "params": { 00:06:14.926 "action_on_timeout": "none", 00:06:14.926 "timeout_us": 0, 00:06:14.926 "timeout_admin_us": 0, 00:06:14.926 "keep_alive_timeout_ms": 10000, 00:06:14.926 "arbitration_burst": 0, 00:06:14.926 "low_priority_weight": 0, 00:06:14.926 "medium_priority_weight": 
0, 00:06:14.926 "high_priority_weight": 0, 00:06:14.926 "nvme_adminq_poll_period_us": 10000, 00:06:14.926 "nvme_ioq_poll_period_us": 0, 00:06:14.926 "io_queue_requests": 0, 00:06:14.926 "delay_cmd_submit": true, 00:06:14.926 "transport_retry_count": 4, 00:06:14.926 "bdev_retry_count": 3, 00:06:14.926 "transport_ack_timeout": 0, 00:06:14.926 "ctrlr_loss_timeout_sec": 0, 00:06:14.926 "reconnect_delay_sec": 0, 00:06:14.926 "fast_io_fail_timeout_sec": 0, 00:06:14.926 "disable_auto_failback": false, 00:06:14.926 "generate_uuids": false, 00:06:14.926 "transport_tos": 0, 00:06:14.926 "nvme_error_stat": false, 00:06:14.926 "rdma_srq_size": 0, 00:06:14.926 "io_path_stat": false, 00:06:14.926 "allow_accel_sequence": false, 00:06:14.926 "rdma_max_cq_size": 0, 00:06:14.926 "rdma_cm_event_timeout_ms": 0, 00:06:14.926 "dhchap_digests": [ 00:06:14.926 "sha256", 00:06:14.926 "sha384", 00:06:14.926 "sha512" 00:06:14.926 ], 00:06:14.926 "dhchap_dhgroups": [ 00:06:14.926 "null", 00:06:14.926 "ffdhe2048", 00:06:14.926 "ffdhe3072", 00:06:14.926 "ffdhe4096", 00:06:14.926 "ffdhe6144", 00:06:14.926 "ffdhe8192" 00:06:14.926 ] 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "bdev_nvme_set_hotplug", 00:06:14.926 "params": { 00:06:14.926 "period_us": 100000, 00:06:14.926 "enable": false 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "bdev_wait_for_examine" 00:06:14.926 } 00:06:14.926 ] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "scsi", 00:06:14.926 "config": null 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "scheduler", 00:06:14.926 "config": [ 00:06:14.926 { 00:06:14.926 "method": "framework_set_scheduler", 00:06:14.926 "params": { 00:06:14.926 "name": "static" 00:06:14.926 } 00:06:14.926 } 00:06:14.926 ] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "vhost_scsi", 00:06:14.926 "config": [] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "vhost_blk", 00:06:14.926 "config": [] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "ublk", 00:06:14.926 "config": [] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "nbd", 00:06:14.926 "config": [] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "nvmf", 00:06:14.926 "config": [ 00:06:14.926 { 00:06:14.926 "method": "nvmf_set_config", 00:06:14.926 "params": { 00:06:14.926 "discovery_filter": "match_any", 00:06:14.926 "admin_cmd_passthru": { 00:06:14.926 "identify_ctrlr": false 00:06:14.926 }, 00:06:14.926 "dhchap_digests": [ 00:06:14.926 "sha256", 00:06:14.926 "sha384", 00:06:14.926 "sha512" 00:06:14.926 ], 00:06:14.926 "dhchap_dhgroups": [ 00:06:14.926 "null", 00:06:14.926 "ffdhe2048", 00:06:14.926 "ffdhe3072", 00:06:14.926 "ffdhe4096", 00:06:14.926 "ffdhe6144", 00:06:14.926 "ffdhe8192" 00:06:14.926 ] 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "nvmf_set_max_subsystems", 00:06:14.926 "params": { 00:06:14.926 "max_subsystems": 1024 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "nvmf_set_crdt", 00:06:14.926 "params": { 00:06:14.926 "crdt1": 0, 00:06:14.926 "crdt2": 0, 00:06:14.926 "crdt3": 0 00:06:14.926 } 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "method": "nvmf_create_transport", 00:06:14.926 "params": { 00:06:14.926 "trtype": "TCP", 00:06:14.926 "max_queue_depth": 128, 00:06:14.926 "max_io_qpairs_per_ctrlr": 127, 00:06:14.926 "in_capsule_data_size": 4096, 00:06:14.926 "max_io_size": 131072, 00:06:14.926 "io_unit_size": 131072, 00:06:14.926 "max_aq_depth": 128, 00:06:14.926 "num_shared_buffers": 511, 00:06:14.926 
"buf_cache_size": 4294967295, 00:06:14.926 "dif_insert_or_strip": false, 00:06:14.926 "zcopy": false, 00:06:14.926 "c2h_success": true, 00:06:14.926 "sock_priority": 0, 00:06:14.926 "abort_timeout_sec": 1, 00:06:14.926 "ack_timeout": 0, 00:06:14.926 "data_wr_pool_size": 0 00:06:14.926 } 00:06:14.926 } 00:06:14.926 ] 00:06:14.926 }, 00:06:14.926 { 00:06:14.926 "subsystem": "iscsi", 00:06:14.926 "config": [ 00:06:14.926 { 00:06:14.926 "method": "iscsi_set_options", 00:06:14.926 "params": { 00:06:14.926 "node_base": "iqn.2016-06.io.spdk", 00:06:14.926 "max_sessions": 128, 00:06:14.926 "max_connections_per_session": 2, 00:06:14.926 "max_queue_depth": 64, 00:06:14.926 "default_time2wait": 2, 00:06:14.926 "default_time2retain": 20, 00:06:14.926 "first_burst_length": 8192, 00:06:14.926 "immediate_data": true, 00:06:14.926 "allow_duplicated_isid": false, 00:06:14.927 "error_recovery_level": 0, 00:06:14.927 "nop_timeout": 60, 00:06:14.927 "nop_in_interval": 30, 00:06:14.927 "disable_chap": false, 00:06:14.927 "require_chap": false, 00:06:14.927 "mutual_chap": false, 00:06:14.927 "chap_group": 0, 00:06:14.927 "max_large_datain_per_connection": 64, 00:06:14.927 "max_r2t_per_connection": 4, 00:06:14.927 "pdu_pool_size": 36864, 00:06:14.927 "immediate_data_pool_size": 16384, 00:06:14.927 "data_out_pool_size": 2048 00:06:14.927 } 00:06:14.927 } 00:06:14.927 ] 00:06:14.927 } 00:06:14.927 ] 00:06:14.927 } 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56980 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56980 ']' 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56980 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56980 00:06:14.927 killing process with pid 56980 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56980' 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56980 00:06:14.927 10:00:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56980 00:06:15.191 10:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57000 00:06:15.191 10:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:15.191 10:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57000 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57000 ']' 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57000 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:20.464 10:00:34 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57000 00:06:20.464 killing process with pid 57000 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57000' 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57000 00:06:20.464 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57000 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:20.723 00:06:20.723 real 0m6.555s 00:06:20.723 user 0m6.101s 00:06:20.723 sys 0m0.625s 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.723 ************************************ 00:06:20.723 END TEST skip_rpc_with_json 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.723 ************************************ 00:06:20.723 10:00:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:20.723 10:00:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.723 10:00:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.723 10:00:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.723 ************************************ 00:06:20.723 START TEST skip_rpc_with_delay 00:06:20.723 ************************************ 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.723 10:00:34 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:20.723 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.983 [2024-11-19 10:00:34.625233] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:20.983 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:20.983 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.983 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.983 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.983 00:06:20.983 real 0m0.101s 00:06:20.983 user 0m0.066s 00:06:20.983 sys 0m0.033s 00:06:20.983 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.983 ************************************ 00:06:20.983 END TEST skip_rpc_with_delay 00:06:20.983 ************************************ 00:06:20.983 10:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:20.983 10:00:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:20.983 10:00:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:20.983 10:00:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:20.983 10:00:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.983 10:00:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.983 10:00:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.983 ************************************ 00:06:20.983 START TEST exit_on_failed_rpc_init 00:06:20.983 ************************************ 00:06:20.983 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:20.983 10:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57110 00:06:20.983 10:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.983 10:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57110 00:06:20.983 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57110 ']' 00:06:20.983 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.983 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.984 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.984 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.984 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.984 [2024-11-19 10:00:34.777008] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
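The skip_rpc_with_delay check that just ran reduces to a single assertion: spdk_tgt must refuse --wait-for-rpc when it is started with --no-rpc-server. A minimal sketch of that assertion, using the binary path from the trace (the NOT/valid_exec_arg plumbing of autotest_common.sh is omitted):

    # Expect a non-zero exit; the target should log
    # "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'ERROR: --wait-for-rpc was accepted without an RPC server' >&2
        exit 1
    fi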
00:06:20.984 [2024-11-19 10:00:34.777137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57110 ] 00:06:21.244 [2024-11-19 10:00:34.919353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.244 [2024-11-19 10:00:34.981382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.244 [2024-11-19 10:00:35.050981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:22.183 10:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.183 [2024-11-19 10:00:35.823305] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:22.183 [2024-11-19 10:00:35.823395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57128 ] 00:06:22.183 [2024-11-19 10:00:35.974109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.183 [2024-11-19 10:00:36.036108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.183 [2024-11-19 10:00:36.036230] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
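exit_on_failed_rpc_init starts a first target that owns /var/tmp/spdk.sock (pid 57110 above) and then checks that a second target cannot initialize its RPC service on the same socket. A sketch of the scenario with the core masks from the trace; the backgrounding, the wait for the listener, and the expected-failure framing are simplifications of the real helpers:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # first instance binds /var/tmp/spdk.sock
    # ... wait for the socket to be listening (waitforlisten in the real test) ...
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
        echo 'ERROR: second target started despite the RPC socket being in use' >&2
        exit 1
    fi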
00:06:22.183 [2024-11-19 10:00:36.036251] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:22.183 [2024-11-19 10:00:36.036263] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57110 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57110 ']' 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57110 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57110 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.443 killing process with pid 57110 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57110' 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57110 00:06:22.443 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57110 00:06:22.702 00:06:22.702 real 0m1.825s 00:06:22.702 user 0m2.092s 00:06:22.702 sys 0m0.434s 00:06:22.702 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.702 10:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.702 ************************************ 00:06:22.702 END TEST exit_on_failed_rpc_init 00:06:22.702 ************************************ 00:06:22.702 10:00:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:22.702 00:06:22.702 real 0m14.270s 00:06:22.702 user 0m13.478s 00:06:22.702 sys 0m1.572s 00:06:22.702 10:00:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.702 10:00:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.702 ************************************ 00:06:22.702 END TEST skip_rpc 00:06:22.702 ************************************ 00:06:22.962 10:00:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:22.962 10:00:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.962 10:00:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.962 10:00:36 -- common/autotest_common.sh@10 -- # set +x 00:06:22.962 
************************************ 00:06:22.962 START TEST rpc_client 00:06:22.962 ************************************ 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:22.962 * Looking for test storage... 00:06:22.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.962 10:00:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.962 --rc genhtml_branch_coverage=1 00:06:22.962 --rc genhtml_function_coverage=1 00:06:22.962 --rc genhtml_legend=1 00:06:22.962 --rc geninfo_all_blocks=1 00:06:22.962 --rc geninfo_unexecuted_blocks=1 00:06:22.962 00:06:22.962 ' 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.962 --rc genhtml_branch_coverage=1 00:06:22.962 --rc genhtml_function_coverage=1 00:06:22.962 --rc genhtml_legend=1 00:06:22.962 --rc geninfo_all_blocks=1 00:06:22.962 --rc geninfo_unexecuted_blocks=1 00:06:22.962 00:06:22.962 ' 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.962 --rc genhtml_branch_coverage=1 00:06:22.962 --rc genhtml_function_coverage=1 00:06:22.962 --rc genhtml_legend=1 00:06:22.962 --rc geninfo_all_blocks=1 00:06:22.962 --rc geninfo_unexecuted_blocks=1 00:06:22.962 00:06:22.962 ' 00:06:22.962 10:00:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.963 --rc genhtml_branch_coverage=1 00:06:22.963 --rc genhtml_function_coverage=1 00:06:22.963 --rc genhtml_legend=1 00:06:22.963 --rc geninfo_all_blocks=1 00:06:22.963 --rc geninfo_unexecuted_blocks=1 00:06:22.963 00:06:22.963 ' 00:06:22.963 10:00:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:22.963 OK 00:06:22.963 10:00:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:22.963 00:06:22.963 real 0m0.208s 00:06:22.963 user 0m0.128s 00:06:22.963 sys 0m0.094s 00:06:22.963 10:00:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.963 ************************************ 00:06:22.963 10:00:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:22.963 END TEST rpc_client 00:06:22.963 ************************************ 00:06:23.222 10:00:36 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:23.222 10:00:36 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.222 10:00:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.222 10:00:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.222 ************************************ 00:06:23.222 START TEST json_config 00:06:23.222 ************************************ 00:06:23.222 10:00:36 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:23.222 10:00:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.222 10:00:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.222 10:00:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.222 10:00:37 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.222 10:00:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.222 10:00:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.222 10:00:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.222 10:00:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.222 10:00:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.222 10:00:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.222 10:00:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.222 10:00:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.222 10:00:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.222 10:00:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.222 10:00:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.222 10:00:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:23.222 10:00:37 json_config -- scripts/common.sh@345 -- # : 1 00:06:23.222 10:00:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.222 10:00:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.222 10:00:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:23.222 10:00:37 json_config -- scripts/common.sh@353 -- # local d=1 00:06:23.222 10:00:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.222 10:00:37 json_config -- scripts/common.sh@355 -- # echo 1 00:06:23.222 10:00:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.222 10:00:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:23.222 10:00:37 json_config -- scripts/common.sh@353 -- # local d=2 00:06:23.222 10:00:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.222 10:00:37 json_config -- scripts/common.sh@355 -- # echo 2 00:06:23.222 10:00:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.222 10:00:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.222 10:00:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.223 10:00:37 json_config -- scripts/common.sh@368 -- # return 0 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.223 --rc genhtml_branch_coverage=1 00:06:23.223 --rc genhtml_function_coverage=1 00:06:23.223 --rc genhtml_legend=1 00:06:23.223 --rc geninfo_all_blocks=1 00:06:23.223 --rc geninfo_unexecuted_blocks=1 00:06:23.223 00:06:23.223 ' 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.223 --rc genhtml_branch_coverage=1 00:06:23.223 --rc genhtml_function_coverage=1 00:06:23.223 --rc genhtml_legend=1 00:06:23.223 --rc geninfo_all_blocks=1 00:06:23.223 --rc geninfo_unexecuted_blocks=1 00:06:23.223 00:06:23.223 ' 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.223 --rc genhtml_branch_coverage=1 00:06:23.223 --rc genhtml_function_coverage=1 00:06:23.223 --rc genhtml_legend=1 00:06:23.223 --rc geninfo_all_blocks=1 00:06:23.223 --rc geninfo_unexecuted_blocks=1 00:06:23.223 00:06:23.223 ' 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.223 --rc genhtml_branch_coverage=1 00:06:23.223 --rc genhtml_function_coverage=1 00:06:23.223 --rc genhtml_legend=1 00:06:23.223 --rc geninfo_all_blocks=1 00:06:23.223 --rc geninfo_unexecuted_blocks=1 00:06:23.223 00:06:23.223 ' 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.223 10:00:37 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.223 10:00:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.223 10:00:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.223 10:00:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.223 10:00:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.223 10:00:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.223 10:00:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.223 10:00:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.223 10:00:37 json_config -- paths/export.sh@5 -- # export PATH 00:06:23.223 10:00:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@51 -- # : 0 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.223 10:00:37 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.223 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.223 10:00:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:23.223 INFO: JSON configuration test init 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.223 Waiting for target to run... 00:06:23.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
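The associative arrays declared above (app_pid, app_socket, app_params, configs_path) are what json_config_test_start_app expands into the spdk_tgt command line that appears next in the trace. A condensed sketch of that expansion, not the common.sh code itself:

    declare -A app_params=( [target]='-m 0x1 -s 1024' )
    declare -A app_socket=( [target]='/var/tmp/spdk_tgt.sock' )
    declare -A app_pid
    # Launch the target in the background and remember its pid, as json_config/common.sh does.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[target]} -r ${app_socket[target]} --wait-for-rpc &
    app_pid[target]=$!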
00:06:23.223 10:00:37 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:23.223 10:00:37 json_config -- json_config/common.sh@9 -- # local app=target 00:06:23.223 10:00:37 json_config -- json_config/common.sh@10 -- # shift 00:06:23.223 10:00:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.223 10:00:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.223 10:00:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.223 10:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.223 10:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.223 10:00:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57267 00:06:23.223 10:00:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.223 10:00:37 json_config -- json_config/common.sh@25 -- # waitforlisten 57267 /var/tmp/spdk_tgt.sock 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@835 -- # '[' -z 57267 ']' 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.223 10:00:37 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.223 10:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.483 [2024-11-19 10:00:37.137064] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
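Because the target was started with --wait-for-rpc, it comes up idle and is configured entirely over /var/tmp/spdk_tgt.sock. The json_config.sh@280/@281 entries a little further down generate a config with gen_nvme.sh and hand it to the load_config RPC; piping the two commands together is inferred from the script, not shown literally in the log:

    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems \
        | /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config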
00:06:23.483 [2024-11-19 10:00:37.137164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57267 ] 00:06:23.743 [2024-11-19 10:00:37.570443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.743 [2024-11-19 10:00:37.621101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.312 00:06:24.312 10:00:38 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.312 10:00:38 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:24.312 10:00:38 json_config -- json_config/common.sh@26 -- # echo '' 00:06:24.312 10:00:38 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:24.312 10:00:38 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:24.312 10:00:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.312 10:00:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.312 10:00:38 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:24.312 10:00:38 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:24.312 10:00:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.312 10:00:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.572 10:00:38 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:24.572 10:00:38 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:24.572 10:00:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:24.831 [2024-11-19 10:00:38.542139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:25.091 10:00:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.091 10:00:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:25.091 10:00:38 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:25.091 10:00:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:25.350 10:00:39 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:25.350 10:00:39 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:25.350 10:00:39 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:25.350 10:00:39 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:25.350 10:00:39 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@54 -- # sort 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:25.351 10:00:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.351 10:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:25.351 10:00:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.351 10:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:25.351 10:00:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.351 10:00:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.609 MallocForNvmf0 00:06:25.609 10:00:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.609 10:00:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.868 MallocForNvmf1 00:06:25.868 10:00:39 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.868 10:00:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:26.128 [2024-11-19 10:00:39.869843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.128 10:00:39 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:26.128 10:00:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:26.387 10:00:40 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.387 10:00:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.647 10:00:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.647 10:00:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.906 10:00:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.906 10:00:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:27.166 [2024-11-19 10:00:40.866473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:27.166 10:00:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:27.166 10:00:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.166 10:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.166 10:00:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:27.166 10:00:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.166 10:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.166 10:00:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:27.166 10:00:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.166 10:00:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.425 MallocBdevForConfigChangeCheck 00:06:27.425 10:00:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:27.425 10:00:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.425 10:00:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.425 10:00:41 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:27.425 10:00:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.994 INFO: shutting down applications... 00:06:27.994 10:00:41 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
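Condensing the create_nvmf_subsystem_config steps traced above into the equivalent rpc.py invocations (arguments exactly as logged; only the $rpc shorthand is editorial):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420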
00:06:27.994 10:00:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:27.994 10:00:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:27.994 10:00:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:27.994 10:00:41 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:28.254 Calling clear_iscsi_subsystem 00:06:28.254 Calling clear_nvmf_subsystem 00:06:28.254 Calling clear_nbd_subsystem 00:06:28.254 Calling clear_ublk_subsystem 00:06:28.254 Calling clear_vhost_blk_subsystem 00:06:28.254 Calling clear_vhost_scsi_subsystem 00:06:28.254 Calling clear_bdev_subsystem 00:06:28.254 10:00:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:28.254 10:00:42 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:28.254 10:00:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:28.254 10:00:42 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.254 10:00:42 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:28.254 10:00:42 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:28.823 10:00:42 json_config -- json_config/json_config.sh@352 -- # break 00:06:28.823 10:00:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:28.823 10:00:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:28.823 10:00:42 json_config -- json_config/common.sh@31 -- # local app=target 00:06:28.823 10:00:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.823 10:00:42 json_config -- json_config/common.sh@35 -- # [[ -n 57267 ]] 00:06:28.823 10:00:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57267 00:06:28.823 10:00:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.823 10:00:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.823 10:00:42 json_config -- json_config/common.sh@41 -- # kill -0 57267 00:06:28.823 10:00:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.392 10:00:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.392 10:00:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.392 10:00:43 json_config -- json_config/common.sh@41 -- # kill -0 57267 00:06:29.392 10:00:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.392 10:00:43 json_config -- json_config/common.sh@43 -- # break 00:06:29.392 10:00:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.392 SPDK target shutdown done 00:06:29.392 10:00:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.392 INFO: relaunching applications... 00:06:29.392 10:00:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
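The shutdown sequence above clears every subsystem and then polls save_config until only global parameters remain before signalling the target. Roughly, its shape is the following; piping the three commands at json_config.sh@352 together is an assumption, and the bound of 100 matches count=100 in the trace:

    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    for (( i = 0; i < 100; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
            | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
            | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty && break
    done
    kill -SIGINT 57267   # then poll kill -0 until the process is gone, as the trace shows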
00:06:29.392 10:00:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:29.392 10:00:43 json_config -- json_config/common.sh@9 -- # local app=target 00:06:29.392 10:00:43 json_config -- json_config/common.sh@10 -- # shift 00:06:29.392 10:00:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.392 10:00:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.392 10:00:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.392 10:00:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.392 10:00:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.392 10:00:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57463 00:06:29.392 10:00:43 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:29.392 10:00:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.392 Waiting for target to run... 00:06:29.392 10:00:43 json_config -- json_config/common.sh@25 -- # waitforlisten 57463 /var/tmp/spdk_tgt.sock 00:06:29.392 10:00:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 57463 ']' 00:06:29.392 10:00:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.392 10:00:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.392 10:00:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.392 10:00:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.392 10:00:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.392 [2024-11-19 10:00:43.078269] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:29.392 [2024-11-19 10:00:43.078372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57463 ] 00:06:29.651 [2024-11-19 10:00:43.513359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.911 [2024-11-19 10:00:43.553290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.911 [2024-11-19 10:00:43.691235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.171 [2024-11-19 10:00:43.903397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.171 [2024-11-19 10:00:43.935464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.171 10:00:44 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.171 10:00:44 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:30.171 00:06:30.171 10:00:44 json_config -- json_config/common.sh@26 -- # echo '' 00:06:30.171 10:00:44 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:30.171 INFO: Checking if target configuration is the same... 
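This relaunch is the round-trip at the heart of the test: the configuration saved from the first target instance is handed straight back at startup via --json, so no --wait-for-rpc is needed this time. In outline (redirecting the earlier save_config output into spdk_tgt_config.json is assumed; only the relaunch command itself is verbatim from the trace):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &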
00:06:30.171 10:00:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:30.171 10:00:44 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:30.171 10:00:44 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:30.171 10:00:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.171 + '[' 2 -ne 2 ']' 00:06:30.171 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:30.171 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:30.430 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:30.430 +++ basename /dev/fd/62 00:06:30.430 ++ mktemp /tmp/62.XXX 00:06:30.430 + tmp_file_1=/tmp/62.pbH 00:06:30.430 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:30.430 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.430 + tmp_file_2=/tmp/spdk_tgt_config.json.zKZ 00:06:30.430 + ret=0 00:06:30.430 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:30.689 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:30.690 + diff -u /tmp/62.pbH /tmp/spdk_tgt_config.json.zKZ 00:06:30.690 INFO: JSON config files are the same 00:06:30.690 + echo 'INFO: JSON config files are the same' 00:06:30.690 + rm /tmp/62.pbH /tmp/spdk_tgt_config.json.zKZ 00:06:30.690 + exit 0 00:06:30.690 10:00:44 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:30.690 INFO: changing configuration and checking if this can be detected... 00:06:30.690 10:00:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:30.690 10:00:44 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.690 10:00:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.949 10:00:44 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:30.949 10:00:44 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:30.949 10:00:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.949 + '[' 2 -ne 2 ']' 00:06:30.949 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:30.949 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:30.949 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:30.949 +++ basename /dev/fd/62 00:06:30.949 ++ mktemp /tmp/62.XXX 00:06:30.949 + tmp_file_1=/tmp/62.0X3 00:06:30.949 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:30.949 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.949 + tmp_file_2=/tmp/spdk_tgt_config.json.rdf 00:06:30.949 + ret=0 00:06:30.949 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:31.518 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:31.518 + diff -u /tmp/62.0X3 /tmp/spdk_tgt_config.json.rdf 00:06:31.518 + ret=1 00:06:31.518 + echo '=== Start of file: /tmp/62.0X3 ===' 00:06:31.518 + cat /tmp/62.0X3 00:06:31.518 + echo '=== End of file: /tmp/62.0X3 ===' 00:06:31.518 + echo '' 00:06:31.518 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rdf ===' 00:06:31.518 + cat /tmp/spdk_tgt_config.json.rdf 00:06:31.518 + echo '=== End of file: /tmp/spdk_tgt_config.json.rdf ===' 00:06:31.518 + echo '' 00:06:31.518 + rm /tmp/62.0X3 /tmp/spdk_tgt_config.json.rdf 00:06:31.518 + exit 1 00:06:31.518 INFO: configuration change detected. 00:06:31.518 10:00:45 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:31.518 10:00:45 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:31.518 10:00:45 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:31.518 10:00:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.518 10:00:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.518 10:00:45 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:31.518 10:00:45 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:31.518 10:00:45 json_config -- json_config/json_config.sh@324 -- # [[ -n 57463 ]] 00:06:31.518 10:00:45 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:31.518 10:00:45 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.519 10:00:45 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:31.519 10:00:45 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:31.519 10:00:45 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:31.519 10:00:45 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:31.519 10:00:45 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:31.519 10:00:45 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.519 10:00:45 json_config -- json_config/json_config.sh@330 -- # killprocess 57463 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@954 -- # '[' -z 57463 ']' 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@958 -- # kill -0 57463 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@959 -- # uname 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57463 00:06:31.519 
killing process with pid 57463 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57463' 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@973 -- # kill 57463 00:06:31.519 10:00:45 json_config -- common/autotest_common.sh@978 -- # wait 57463 00:06:31.778 10:00:45 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:31.778 10:00:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:31.778 10:00:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.778 10:00:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.778 10:00:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:31.778 10:00:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:31.778 INFO: Success 00:06:31.778 ************************************ 00:06:31.778 END TEST json_config 00:06:31.778 ************************************ 00:06:31.778 00:06:31.778 real 0m8.751s 00:06:31.778 user 0m12.578s 00:06:31.778 sys 0m1.792s 00:06:31.778 10:00:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.778 10:00:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.056 10:00:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:32.056 10:00:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.056 10:00:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.056 10:00:45 -- common/autotest_common.sh@10 -- # set +x 00:06:32.056 ************************************ 00:06:32.056 START TEST json_config_extra_key 00:06:32.056 ************************************ 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.057 10:00:45 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.057 --rc genhtml_branch_coverage=1 00:06:32.057 --rc genhtml_function_coverage=1 00:06:32.057 --rc genhtml_legend=1 00:06:32.057 --rc geninfo_all_blocks=1 00:06:32.057 --rc geninfo_unexecuted_blocks=1 00:06:32.057 00:06:32.057 ' 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.057 --rc genhtml_branch_coverage=1 00:06:32.057 --rc genhtml_function_coverage=1 00:06:32.057 --rc genhtml_legend=1 00:06:32.057 --rc geninfo_all_blocks=1 00:06:32.057 --rc geninfo_unexecuted_blocks=1 00:06:32.057 00:06:32.057 ' 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.057 --rc genhtml_branch_coverage=1 00:06:32.057 --rc genhtml_function_coverage=1 00:06:32.057 --rc genhtml_legend=1 00:06:32.057 --rc geninfo_all_blocks=1 00:06:32.057 --rc geninfo_unexecuted_blocks=1 00:06:32.057 00:06:32.057 ' 00:06:32.057 10:00:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.057 --rc genhtml_branch_coverage=1 00:06:32.057 --rc genhtml_function_coverage=1 00:06:32.057 --rc genhtml_legend=1 00:06:32.057 --rc geninfo_all_blocks=1 00:06:32.057 --rc geninfo_unexecuted_blocks=1 00:06:32.057 00:06:32.057 ' 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.057 10:00:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.057 10:00:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.057 10:00:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.057 10:00:45 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.057 10:00:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:32.057 10:00:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.057 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.057 10:00:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:32.057 INFO: launching applications... 00:06:32.057 Waiting for target to run... 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
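The associative arrays declared just above (app_pid, app_socket, app_params, configs_path) are the harness's per-app bookkeeping: they carry the RPC socket path, the core mask and memory size, and the extra_key.json config that the launch on the next lines passes to spdk_tgt. A paraphrased sketch of how they fit together follows; the start_app body here is an assumption, only the values mirror the trace.

# Paraphrased sketch of the per-app bookkeeping and launch; the helper body is
# an assumption, only the values mirror the trace.
rootdir=/home/vagrant/spdk_repo/spdk
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

start_app() {
    local app=$1
    # app_params left unquoted so "-m 0x1 -s 1024" splits into separate arguments
    "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
}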
00:06:32.057 10:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:32.057 10:00:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57617 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57617 /var/tmp/spdk_tgt.sock 00:06:32.058 10:00:45 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:32.058 10:00:45 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57617 ']' 00:06:32.058 10:00:45 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:32.058 10:00:45 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.058 10:00:45 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:32.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:32.058 10:00:45 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.058 10:00:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:32.351 [2024-11-19 10:00:45.946834] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:32.351 [2024-11-19 10:00:45.947208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57617 ] 00:06:32.610 [2024-11-19 10:00:46.390740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.610 [2024-11-19 10:00:46.429644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.610 [2024-11-19 10:00:46.461998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.179 10:00:46 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.179 10:00:46 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:33.179 00:06:33.179 INFO: shutting down applications... 00:06:33.179 10:00:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:33.179 10:00:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57617 ]] 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57617 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57617 00:06:33.179 10:00:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.746 10:00:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.746 10:00:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.746 10:00:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57617 00:06:33.746 10:00:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:33.746 10:00:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:33.746 10:00:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:33.746 10:00:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:33.746 SPDK target shutdown done 00:06:33.746 Success 00:06:33.746 10:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:33.746 ************************************ 00:06:33.747 END TEST json_config_extra_key 00:06:33.747 ************************************ 00:06:33.747 00:06:33.747 real 0m1.753s 00:06:33.747 user 0m1.621s 00:06:33.747 sys 0m0.458s 00:06:33.747 10:00:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.747 10:00:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:33.747 10:00:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:33.747 10:00:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.747 10:00:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.747 10:00:47 -- common/autotest_common.sh@10 -- # set +x 00:06:33.747 ************************************ 00:06:33.747 START TEST alias_rpc 00:06:33.747 ************************************ 00:06:33.747 10:00:47 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:33.747 * Looking for test storage... 
00:06:33.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:33.747 10:00:47 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.747 10:00:47 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.747 10:00:47 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.005 10:00:47 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.005 10:00:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.005 10:00:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.005 10:00:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.005 10:00:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.005 10:00:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.005 10:00:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.005 10:00:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.005 10:00:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.006 10:00:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.006 --rc genhtml_branch_coverage=1 00:06:34.006 --rc genhtml_function_coverage=1 00:06:34.006 --rc genhtml_legend=1 00:06:34.006 --rc geninfo_all_blocks=1 00:06:34.006 --rc geninfo_unexecuted_blocks=1 00:06:34.006 00:06:34.006 ' 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.006 --rc genhtml_branch_coverage=1 00:06:34.006 --rc genhtml_function_coverage=1 00:06:34.006 --rc genhtml_legend=1 00:06:34.006 --rc geninfo_all_blocks=1 00:06:34.006 --rc geninfo_unexecuted_blocks=1 00:06:34.006 00:06:34.006 ' 00:06:34.006 10:00:47 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.006 --rc genhtml_branch_coverage=1 00:06:34.006 --rc genhtml_function_coverage=1 00:06:34.006 --rc genhtml_legend=1 00:06:34.006 --rc geninfo_all_blocks=1 00:06:34.006 --rc geninfo_unexecuted_blocks=1 00:06:34.006 00:06:34.006 ' 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.006 --rc genhtml_branch_coverage=1 00:06:34.006 --rc genhtml_function_coverage=1 00:06:34.006 --rc genhtml_legend=1 00:06:34.006 --rc geninfo_all_blocks=1 00:06:34.006 --rc geninfo_unexecuted_blocks=1 00:06:34.006 00:06:34.006 ' 00:06:34.006 10:00:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.006 10:00:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57689 00:06:34.006 10:00:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57689 00:06:34.006 10:00:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57689 ']' 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.006 10:00:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.006 [2024-11-19 10:00:47.760149] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:34.006 [2024-11-19 10:00:47.760254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57689 ] 00:06:34.265 [2024-11-19 10:00:47.908586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.265 [2024-11-19 10:00:47.962809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.265 [2024-11-19 10:00:48.033783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.523 10:00:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.523 10:00:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.523 10:00:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:34.782 10:00:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57689 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57689 ']' 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57689 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57689 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.782 killing process with pid 57689 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57689' 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 57689 00:06:34.782 10:00:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 57689 00:06:35.350 00:06:35.350 real 0m1.525s 00:06:35.350 user 0m1.641s 00:06:35.350 sys 0m0.425s 00:06:35.350 ************************************ 00:06:35.350 END TEST alias_rpc 00:06:35.350 ************************************ 00:06:35.350 10:00:49 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.350 10:00:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.350 10:00:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:35.350 10:00:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:35.350 10:00:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.350 10:00:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.350 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.350 ************************************ 00:06:35.350 START TEST spdkcli_tcp 00:06:35.350 ************************************ 00:06:35.350 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:35.350 * Looking for test storage... 
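The alias_rpc run above drives the target entirely through `rpc.py load_config -i` on the default /var/tmp/spdk.sock socket. As a usage illustration only (this is not the test script itself), a configuration round-trip with the same tools looks roughly like this; the temp path is illustrative.

# Usage illustration (not alias_rpc.sh itself): round-trip a live target's
# JSON configuration through save_config / load_config.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock                      # spdk_tgt's default RPC socket
CFG=/tmp/spdk_config.json                    # illustrative temp path

"$RPC" -s "$SOCK" save_config > "$CFG"
# -i mirrors the flag in the trace above; it lets load_config continue past
# individual RPC failures instead of aborting on the first error.
"$RPC" -s "$SOCK" load_config -i < "$CFG"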
00:06:35.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:35.350 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.350 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.350 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.608 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.608 10:00:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:35.608 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.608 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.608 --rc genhtml_branch_coverage=1 00:06:35.608 --rc genhtml_function_coverage=1 00:06:35.608 --rc genhtml_legend=1 00:06:35.608 --rc geninfo_all_blocks=1 00:06:35.608 --rc geninfo_unexecuted_blocks=1 00:06:35.608 00:06:35.608 ' 00:06:35.608 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.608 --rc genhtml_branch_coverage=1 00:06:35.608 --rc genhtml_function_coverage=1 00:06:35.608 --rc genhtml_legend=1 00:06:35.608 --rc geninfo_all_blocks=1 00:06:35.608 --rc geninfo_unexecuted_blocks=1 00:06:35.608 
00:06:35.609 ' 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.609 --rc genhtml_branch_coverage=1 00:06:35.609 --rc genhtml_function_coverage=1 00:06:35.609 --rc genhtml_legend=1 00:06:35.609 --rc geninfo_all_blocks=1 00:06:35.609 --rc geninfo_unexecuted_blocks=1 00:06:35.609 00:06:35.609 ' 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.609 --rc genhtml_branch_coverage=1 00:06:35.609 --rc genhtml_function_coverage=1 00:06:35.609 --rc genhtml_legend=1 00:06:35.609 --rc geninfo_all_blocks=1 00:06:35.609 --rc geninfo_unexecuted_blocks=1 00:06:35.609 00:06:35.609 ' 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57766 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57766 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57766 ']' 00:06:35.609 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.609 10:00:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.609 [2024-11-19 10:00:49.347545] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:35.609 [2024-11-19 10:00:49.347660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57766 ] 00:06:35.609 [2024-11-19 10:00:49.497457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.866 [2024-11-19 10:00:49.558530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.867 [2024-11-19 10:00:49.558538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.867 [2024-11-19 10:00:49.633735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.125 10:00:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.125 10:00:49 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:36.125 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57781 00:06:36.125 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:36.125 10:00:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:36.384 [ 00:06:36.384 "bdev_malloc_delete", 00:06:36.384 "bdev_malloc_create", 00:06:36.384 "bdev_null_resize", 00:06:36.384 "bdev_null_delete", 00:06:36.384 "bdev_null_create", 00:06:36.384 "bdev_nvme_cuse_unregister", 00:06:36.384 "bdev_nvme_cuse_register", 00:06:36.384 "bdev_opal_new_user", 00:06:36.384 "bdev_opal_set_lock_state", 00:06:36.384 "bdev_opal_delete", 00:06:36.384 "bdev_opal_get_info", 00:06:36.384 "bdev_opal_create", 00:06:36.384 "bdev_nvme_opal_revert", 00:06:36.384 "bdev_nvme_opal_init", 00:06:36.384 "bdev_nvme_send_cmd", 00:06:36.384 "bdev_nvme_set_keys", 00:06:36.384 "bdev_nvme_get_path_iostat", 00:06:36.384 "bdev_nvme_get_mdns_discovery_info", 00:06:36.384 "bdev_nvme_stop_mdns_discovery", 00:06:36.384 "bdev_nvme_start_mdns_discovery", 00:06:36.384 "bdev_nvme_set_multipath_policy", 00:06:36.384 "bdev_nvme_set_preferred_path", 00:06:36.384 "bdev_nvme_get_io_paths", 00:06:36.384 "bdev_nvme_remove_error_injection", 00:06:36.384 "bdev_nvme_add_error_injection", 00:06:36.384 "bdev_nvme_get_discovery_info", 00:06:36.384 "bdev_nvme_stop_discovery", 00:06:36.384 "bdev_nvme_start_discovery", 00:06:36.384 "bdev_nvme_get_controller_health_info", 00:06:36.384 "bdev_nvme_disable_controller", 00:06:36.384 "bdev_nvme_enable_controller", 00:06:36.384 "bdev_nvme_reset_controller", 00:06:36.384 "bdev_nvme_get_transport_statistics", 00:06:36.385 "bdev_nvme_apply_firmware", 00:06:36.385 "bdev_nvme_detach_controller", 00:06:36.385 "bdev_nvme_get_controllers", 00:06:36.385 "bdev_nvme_attach_controller", 00:06:36.385 "bdev_nvme_set_hotplug", 00:06:36.385 "bdev_nvme_set_options", 00:06:36.385 "bdev_passthru_delete", 00:06:36.385 "bdev_passthru_create", 00:06:36.385 "bdev_lvol_set_parent_bdev", 00:06:36.385 "bdev_lvol_set_parent", 00:06:36.385 "bdev_lvol_check_shallow_copy", 00:06:36.385 "bdev_lvol_start_shallow_copy", 00:06:36.385 "bdev_lvol_grow_lvstore", 00:06:36.385 "bdev_lvol_get_lvols", 00:06:36.385 "bdev_lvol_get_lvstores", 00:06:36.385 "bdev_lvol_delete", 00:06:36.385 "bdev_lvol_set_read_only", 00:06:36.385 "bdev_lvol_resize", 00:06:36.385 "bdev_lvol_decouple_parent", 00:06:36.385 "bdev_lvol_inflate", 00:06:36.385 "bdev_lvol_rename", 00:06:36.385 "bdev_lvol_clone_bdev", 00:06:36.385 "bdev_lvol_clone", 00:06:36.385 "bdev_lvol_snapshot", 
00:06:36.385 "bdev_lvol_create", 00:06:36.385 "bdev_lvol_delete_lvstore", 00:06:36.385 "bdev_lvol_rename_lvstore", 00:06:36.385 "bdev_lvol_create_lvstore", 00:06:36.385 "bdev_raid_set_options", 00:06:36.385 "bdev_raid_remove_base_bdev", 00:06:36.385 "bdev_raid_add_base_bdev", 00:06:36.385 "bdev_raid_delete", 00:06:36.385 "bdev_raid_create", 00:06:36.385 "bdev_raid_get_bdevs", 00:06:36.385 "bdev_error_inject_error", 00:06:36.385 "bdev_error_delete", 00:06:36.385 "bdev_error_create", 00:06:36.385 "bdev_split_delete", 00:06:36.385 "bdev_split_create", 00:06:36.385 "bdev_delay_delete", 00:06:36.385 "bdev_delay_create", 00:06:36.385 "bdev_delay_update_latency", 00:06:36.385 "bdev_zone_block_delete", 00:06:36.385 "bdev_zone_block_create", 00:06:36.385 "blobfs_create", 00:06:36.385 "blobfs_detect", 00:06:36.385 "blobfs_set_cache_size", 00:06:36.385 "bdev_aio_delete", 00:06:36.385 "bdev_aio_rescan", 00:06:36.385 "bdev_aio_create", 00:06:36.385 "bdev_ftl_set_property", 00:06:36.385 "bdev_ftl_get_properties", 00:06:36.385 "bdev_ftl_get_stats", 00:06:36.385 "bdev_ftl_unmap", 00:06:36.385 "bdev_ftl_unload", 00:06:36.385 "bdev_ftl_delete", 00:06:36.385 "bdev_ftl_load", 00:06:36.385 "bdev_ftl_create", 00:06:36.385 "bdev_virtio_attach_controller", 00:06:36.385 "bdev_virtio_scsi_get_devices", 00:06:36.385 "bdev_virtio_detach_controller", 00:06:36.385 "bdev_virtio_blk_set_hotplug", 00:06:36.385 "bdev_iscsi_delete", 00:06:36.385 "bdev_iscsi_create", 00:06:36.385 "bdev_iscsi_set_options", 00:06:36.385 "bdev_uring_delete", 00:06:36.385 "bdev_uring_rescan", 00:06:36.385 "bdev_uring_create", 00:06:36.385 "accel_error_inject_error", 00:06:36.385 "ioat_scan_accel_module", 00:06:36.385 "dsa_scan_accel_module", 00:06:36.385 "iaa_scan_accel_module", 00:06:36.385 "keyring_file_remove_key", 00:06:36.385 "keyring_file_add_key", 00:06:36.385 "keyring_linux_set_options", 00:06:36.385 "fsdev_aio_delete", 00:06:36.385 "fsdev_aio_create", 00:06:36.385 "iscsi_get_histogram", 00:06:36.385 "iscsi_enable_histogram", 00:06:36.385 "iscsi_set_options", 00:06:36.385 "iscsi_get_auth_groups", 00:06:36.385 "iscsi_auth_group_remove_secret", 00:06:36.385 "iscsi_auth_group_add_secret", 00:06:36.385 "iscsi_delete_auth_group", 00:06:36.385 "iscsi_create_auth_group", 00:06:36.385 "iscsi_set_discovery_auth", 00:06:36.385 "iscsi_get_options", 00:06:36.385 "iscsi_target_node_request_logout", 00:06:36.385 "iscsi_target_node_set_redirect", 00:06:36.385 "iscsi_target_node_set_auth", 00:06:36.385 "iscsi_target_node_add_lun", 00:06:36.385 "iscsi_get_stats", 00:06:36.385 "iscsi_get_connections", 00:06:36.385 "iscsi_portal_group_set_auth", 00:06:36.385 "iscsi_start_portal_group", 00:06:36.385 "iscsi_delete_portal_group", 00:06:36.385 "iscsi_create_portal_group", 00:06:36.385 "iscsi_get_portal_groups", 00:06:36.385 "iscsi_delete_target_node", 00:06:36.385 "iscsi_target_node_remove_pg_ig_maps", 00:06:36.385 "iscsi_target_node_add_pg_ig_maps", 00:06:36.385 "iscsi_create_target_node", 00:06:36.385 "iscsi_get_target_nodes", 00:06:36.385 "iscsi_delete_initiator_group", 00:06:36.385 "iscsi_initiator_group_remove_initiators", 00:06:36.385 "iscsi_initiator_group_add_initiators", 00:06:36.385 "iscsi_create_initiator_group", 00:06:36.385 "iscsi_get_initiator_groups", 00:06:36.385 "nvmf_set_crdt", 00:06:36.385 "nvmf_set_config", 00:06:36.385 "nvmf_set_max_subsystems", 00:06:36.385 "nvmf_stop_mdns_prr", 00:06:36.385 "nvmf_publish_mdns_prr", 00:06:36.385 "nvmf_subsystem_get_listeners", 00:06:36.385 "nvmf_subsystem_get_qpairs", 00:06:36.385 
"nvmf_subsystem_get_controllers", 00:06:36.385 "nvmf_get_stats", 00:06:36.385 "nvmf_get_transports", 00:06:36.385 "nvmf_create_transport", 00:06:36.385 "nvmf_get_targets", 00:06:36.385 "nvmf_delete_target", 00:06:36.385 "nvmf_create_target", 00:06:36.385 "nvmf_subsystem_allow_any_host", 00:06:36.385 "nvmf_subsystem_set_keys", 00:06:36.385 "nvmf_subsystem_remove_host", 00:06:36.385 "nvmf_subsystem_add_host", 00:06:36.385 "nvmf_ns_remove_host", 00:06:36.385 "nvmf_ns_add_host", 00:06:36.385 "nvmf_subsystem_remove_ns", 00:06:36.385 "nvmf_subsystem_set_ns_ana_group", 00:06:36.385 "nvmf_subsystem_add_ns", 00:06:36.385 "nvmf_subsystem_listener_set_ana_state", 00:06:36.385 "nvmf_discovery_get_referrals", 00:06:36.385 "nvmf_discovery_remove_referral", 00:06:36.385 "nvmf_discovery_add_referral", 00:06:36.385 "nvmf_subsystem_remove_listener", 00:06:36.385 "nvmf_subsystem_add_listener", 00:06:36.385 "nvmf_delete_subsystem", 00:06:36.385 "nvmf_create_subsystem", 00:06:36.385 "nvmf_get_subsystems", 00:06:36.385 "env_dpdk_get_mem_stats", 00:06:36.385 "nbd_get_disks", 00:06:36.385 "nbd_stop_disk", 00:06:36.385 "nbd_start_disk", 00:06:36.385 "ublk_recover_disk", 00:06:36.385 "ublk_get_disks", 00:06:36.385 "ublk_stop_disk", 00:06:36.385 "ublk_start_disk", 00:06:36.385 "ublk_destroy_target", 00:06:36.385 "ublk_create_target", 00:06:36.385 "virtio_blk_create_transport", 00:06:36.385 "virtio_blk_get_transports", 00:06:36.385 "vhost_controller_set_coalescing", 00:06:36.385 "vhost_get_controllers", 00:06:36.385 "vhost_delete_controller", 00:06:36.385 "vhost_create_blk_controller", 00:06:36.385 "vhost_scsi_controller_remove_target", 00:06:36.385 "vhost_scsi_controller_add_target", 00:06:36.385 "vhost_start_scsi_controller", 00:06:36.385 "vhost_create_scsi_controller", 00:06:36.385 "thread_set_cpumask", 00:06:36.385 "scheduler_set_options", 00:06:36.385 "framework_get_governor", 00:06:36.385 "framework_get_scheduler", 00:06:36.385 "framework_set_scheduler", 00:06:36.385 "framework_get_reactors", 00:06:36.385 "thread_get_io_channels", 00:06:36.385 "thread_get_pollers", 00:06:36.385 "thread_get_stats", 00:06:36.385 "framework_monitor_context_switch", 00:06:36.385 "spdk_kill_instance", 00:06:36.385 "log_enable_timestamps", 00:06:36.385 "log_get_flags", 00:06:36.385 "log_clear_flag", 00:06:36.385 "log_set_flag", 00:06:36.385 "log_get_level", 00:06:36.385 "log_set_level", 00:06:36.385 "log_get_print_level", 00:06:36.385 "log_set_print_level", 00:06:36.385 "framework_enable_cpumask_locks", 00:06:36.385 "framework_disable_cpumask_locks", 00:06:36.385 "framework_wait_init", 00:06:36.385 "framework_start_init", 00:06:36.385 "scsi_get_devices", 00:06:36.385 "bdev_get_histogram", 00:06:36.385 "bdev_enable_histogram", 00:06:36.385 "bdev_set_qos_limit", 00:06:36.385 "bdev_set_qd_sampling_period", 00:06:36.385 "bdev_get_bdevs", 00:06:36.385 "bdev_reset_iostat", 00:06:36.385 "bdev_get_iostat", 00:06:36.385 "bdev_examine", 00:06:36.385 "bdev_wait_for_examine", 00:06:36.385 "bdev_set_options", 00:06:36.385 "accel_get_stats", 00:06:36.385 "accel_set_options", 00:06:36.385 "accel_set_driver", 00:06:36.385 "accel_crypto_key_destroy", 00:06:36.385 "accel_crypto_keys_get", 00:06:36.385 "accel_crypto_key_create", 00:06:36.385 "accel_assign_opc", 00:06:36.385 "accel_get_module_info", 00:06:36.385 "accel_get_opc_assignments", 00:06:36.385 "vmd_rescan", 00:06:36.385 "vmd_remove_device", 00:06:36.385 "vmd_enable", 00:06:36.385 "sock_get_default_impl", 00:06:36.385 "sock_set_default_impl", 00:06:36.386 "sock_impl_set_options", 00:06:36.386 
"sock_impl_get_options", 00:06:36.386 "iobuf_get_stats", 00:06:36.386 "iobuf_set_options", 00:06:36.386 "keyring_get_keys", 00:06:36.386 "framework_get_pci_devices", 00:06:36.386 "framework_get_config", 00:06:36.386 "framework_get_subsystems", 00:06:36.386 "fsdev_set_opts", 00:06:36.386 "fsdev_get_opts", 00:06:36.386 "trace_get_info", 00:06:36.386 "trace_get_tpoint_group_mask", 00:06:36.386 "trace_disable_tpoint_group", 00:06:36.386 "trace_enable_tpoint_group", 00:06:36.386 "trace_clear_tpoint_mask", 00:06:36.386 "trace_set_tpoint_mask", 00:06:36.386 "notify_get_notifications", 00:06:36.386 "notify_get_types", 00:06:36.386 "spdk_get_version", 00:06:36.386 "rpc_get_methods" 00:06:36.386 ] 00:06:36.386 10:00:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.386 10:00:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:36.386 10:00:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57766 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57766 ']' 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57766 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57766 00:06:36.386 killing process with pid 57766 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57766' 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57766 00:06:36.386 10:00:50 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57766 00:06:36.954 ************************************ 00:06:36.954 END TEST spdkcli_tcp 00:06:36.954 ************************************ 00:06:36.954 00:06:36.954 real 0m1.576s 00:06:36.954 user 0m2.681s 00:06:36.954 sys 0m0.501s 00:06:36.954 10:00:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.954 10:00:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.954 10:00:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.954 10:00:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.954 10:00:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.954 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.954 ************************************ 00:06:36.954 START TEST dpdk_mem_utility 00:06:36.954 ************************************ 00:06:36.954 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.954 * Looking for test storage... 
00:06:36.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:36.954 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.954 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.954 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.215 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:37.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.215 10:00:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:37.215 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.215 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.216 --rc genhtml_branch_coverage=1 00:06:37.216 --rc genhtml_function_coverage=1 00:06:37.216 --rc genhtml_legend=1 00:06:37.216 --rc geninfo_all_blocks=1 00:06:37.216 --rc geninfo_unexecuted_blocks=1 00:06:37.216 00:06:37.216 ' 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.216 --rc genhtml_branch_coverage=1 00:06:37.216 --rc genhtml_function_coverage=1 00:06:37.216 --rc genhtml_legend=1 00:06:37.216 --rc geninfo_all_blocks=1 00:06:37.216 --rc geninfo_unexecuted_blocks=1 00:06:37.216 00:06:37.216 ' 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.216 --rc genhtml_branch_coverage=1 00:06:37.216 --rc genhtml_function_coverage=1 00:06:37.216 --rc genhtml_legend=1 00:06:37.216 --rc geninfo_all_blocks=1 00:06:37.216 --rc geninfo_unexecuted_blocks=1 00:06:37.216 00:06:37.216 ' 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.216 --rc genhtml_branch_coverage=1 00:06:37.216 --rc genhtml_function_coverage=1 00:06:37.216 --rc genhtml_legend=1 00:06:37.216 --rc geninfo_all_blocks=1 00:06:37.216 --rc geninfo_unexecuted_blocks=1 00:06:37.216 00:06:37.216 ' 00:06:37.216 10:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:37.216 10:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57863 00:06:37.216 10:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57863 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57863 ']' 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.216 10:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.216 10:00:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.216 [2024-11-19 10:00:51.017453] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:37.216 [2024-11-19 10:00:51.018005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57863 ] 00:06:37.475 [2024-11-19 10:00:51.172248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.475 [2024-11-19 10:00:51.217842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.475 [2024-11-19 10:00:51.284789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.737 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.737 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:37.737 10:00:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:37.737 10:00:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:37.737 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.737 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.737 { 00:06:37.737 "filename": "/tmp/spdk_mem_dump.txt" 00:06:37.737 } 00:06:37.737 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.737 10:00:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:37.737 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:37.737 1 heaps totaling size 810.000000 MiB 00:06:37.737 size: 810.000000 MiB heap id: 0 00:06:37.737 end heaps---------- 00:06:37.737 9 mempools totaling size 595.772034 MiB 00:06:37.737 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:37.737 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:37.737 size: 92.545471 MiB name: bdev_io_57863 00:06:37.737 size: 50.003479 MiB name: msgpool_57863 00:06:37.737 size: 36.509338 MiB name: fsdev_io_57863 00:06:37.737 size: 21.763794 MiB name: PDU_Pool 00:06:37.737 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:37.737 size: 4.133484 MiB name: evtpool_57863 00:06:37.737 size: 0.026123 MiB name: Session_Pool 00:06:37.737 end mempools------- 00:06:37.737 6 memzones totaling size 4.142822 MiB 00:06:37.737 size: 1.000366 MiB name: RG_ring_0_57863 00:06:37.737 size: 1.000366 MiB name: RG_ring_1_57863 00:06:37.737 size: 1.000366 MiB name: RG_ring_4_57863 00:06:37.737 size: 1.000366 MiB name: RG_ring_5_57863 00:06:37.737 size: 0.125366 MiB name: RG_ring_2_57863 00:06:37.737 size: 0.015991 MiB name: RG_ring_3_57863 00:06:37.737 end memzones------- 00:06:37.737 10:00:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:37.737 heap id: 0 total size: 810.000000 MiB number of busy elements: 310 number of free elements: 15 00:06:37.737 list of free elements. 
size: 10.813782 MiB 00:06:37.737 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:37.737 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:37.737 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:37.737 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:37.737 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:37.737 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:37.737 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:37.737 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:37.737 element at address: 0x20001a600000 with size: 0.568054 MiB 00:06:37.737 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:37.737 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:37.737 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:37.737 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:37.737 element at address: 0x200027a00000 with size: 0.395935 MiB 00:06:37.737 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:37.737 list of standard malloc elements. size: 199.267334 MiB 00:06:37.737 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:37.737 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:37.737 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:37.737 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:37.737 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:37.737 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:37.737 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:37.737 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:37.737 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:37.737 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:37.737 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:37.737 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:37.738 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:37.738 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:37.738 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:37.738 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:37.738 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693100 with size: 0.000183 MiB 
00:06:37.739 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:37.739 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a65680 with size: 0.000183 MiB 00:06:37.739 element at 
address: 0x200027a6c280 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e880 
with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:37.739 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:37.740 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:37.740 list of memzone associated elements. 
size: 599.918884 MiB 00:06:37.740 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:37.740 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:37.740 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:37.740 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:37.740 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:37.740 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57863_0 00:06:37.740 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:37.740 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57863_0 00:06:37.740 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:37.740 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57863_0 00:06:37.740 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:37.740 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:37.740 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:37.740 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:37.740 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:37.740 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57863_0 00:06:37.740 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:37.740 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57863 00:06:37.740 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:37.740 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57863 00:06:37.740 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:37.740 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:37.740 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:37.740 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:37.740 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:37.740 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:37.740 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:37.740 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:37.740 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:37.740 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57863 00:06:37.740 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:37.740 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57863 00:06:37.740 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:37.740 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57863 00:06:37.740 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:37.740 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57863 00:06:37.740 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:37.740 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57863 00:06:37.740 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:37.740 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57863 00:06:37.740 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:37.740 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:37.740 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:37.740 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:37.740 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:37.740 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:37.740 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:37.740 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57863 00:06:37.740 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:37.740 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57863 00:06:37.740 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:37.740 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:37.740 element at address: 0x200027a65740 with size: 0.023743 MiB 00:06:37.740 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:37.740 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:37.740 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57863 00:06:37.740 element at address: 0x200027a6b880 with size: 0.002441 MiB 00:06:37.740 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:37.740 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:37.740 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57863 00:06:37.740 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:37.740 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57863 00:06:37.740 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:37.740 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57863 00:06:37.740 element at address: 0x200027a6c340 with size: 0.000305 MiB 00:06:37.740 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:37.740 10:00:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:37.740 10:00:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57863 00:06:37.740 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57863 ']' 00:06:37.740 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57863 00:06:37.740 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:37.740 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.740 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57863 00:06:38.000 killing process with pid 57863 00:06:38.000 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.000 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.000 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57863' 00:06:38.000 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57863 00:06:38.000 10:00:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57863 00:06:38.259 00:06:38.259 real 0m1.310s 00:06:38.259 user 0m1.280s 00:06:38.259 sys 0m0.438s 00:06:38.259 10:00:52 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.259 ************************************ 00:06:38.259 END TEST dpdk_mem_utility 00:06:38.259 10:00:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:38.259 ************************************ 00:06:38.259 10:00:52 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:38.259 10:00:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.259 10:00:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.259 10:00:52 -- common/autotest_common.sh@10 -- # set +x 
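(For reference, the dpdk_mem_utility run above reduces to three calls against a live spdk_tgt: the env_dpdk_get_mem_stats RPC, which writes /tmp/spdk_mem_dump.txt as shown in the trace, then scripts/dpdk_mem_info.py for the heap/mempool/memzone summary, and dpdk_mem_info.py -m 0 for the per-element map of heap 0. A minimal manual sketch, assuming a target is already listening on /var/tmp/spdk.sock and using the repository paths from this workspace; rpc_cmd in the trace is a wrapper around scripts/rpc.py:
    # ask the running target to dump its DPDK memory state (default output: /tmp/spdk_mem_dump.txt)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones from the dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # print the detailed busy/free element list for heap 0, as captured above
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
)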
00:06:38.259 ************************************ 00:06:38.259 START TEST event 00:06:38.259 ************************************ 00:06:38.259 10:00:52 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:38.519 * Looking for test storage... 00:06:38.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.519 10:00:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.519 10:00:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.519 10:00:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.519 10:00:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.519 10:00:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.519 10:00:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.519 10:00:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.519 10:00:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.519 10:00:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.519 10:00:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.519 10:00:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.519 10:00:52 event -- scripts/common.sh@344 -- # case "$op" in 00:06:38.519 10:00:52 event -- scripts/common.sh@345 -- # : 1 00:06:38.519 10:00:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.519 10:00:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.519 10:00:52 event -- scripts/common.sh@365 -- # decimal 1 00:06:38.519 10:00:52 event -- scripts/common.sh@353 -- # local d=1 00:06:38.519 10:00:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.519 10:00:52 event -- scripts/common.sh@355 -- # echo 1 00:06:38.519 10:00:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.519 10:00:52 event -- scripts/common.sh@366 -- # decimal 2 00:06:38.519 10:00:52 event -- scripts/common.sh@353 -- # local d=2 00:06:38.519 10:00:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.519 10:00:52 event -- scripts/common.sh@355 -- # echo 2 00:06:38.519 10:00:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.519 10:00:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.519 10:00:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.519 10:00:52 event -- scripts/common.sh@368 -- # return 0 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.519 --rc genhtml_branch_coverage=1 00:06:38.519 --rc genhtml_function_coverage=1 00:06:38.519 --rc genhtml_legend=1 00:06:38.519 --rc geninfo_all_blocks=1 00:06:38.519 --rc geninfo_unexecuted_blocks=1 00:06:38.519 00:06:38.519 ' 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.519 --rc genhtml_branch_coverage=1 00:06:38.519 --rc genhtml_function_coverage=1 00:06:38.519 --rc genhtml_legend=1 00:06:38.519 --rc 
geninfo_all_blocks=1 00:06:38.519 --rc geninfo_unexecuted_blocks=1 00:06:38.519 00:06:38.519 ' 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.519 --rc genhtml_branch_coverage=1 00:06:38.519 --rc genhtml_function_coverage=1 00:06:38.519 --rc genhtml_legend=1 00:06:38.519 --rc geninfo_all_blocks=1 00:06:38.519 --rc geninfo_unexecuted_blocks=1 00:06:38.519 00:06:38.519 ' 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.519 --rc genhtml_branch_coverage=1 00:06:38.519 --rc genhtml_function_coverage=1 00:06:38.519 --rc genhtml_legend=1 00:06:38.519 --rc geninfo_all_blocks=1 00:06:38.519 --rc geninfo_unexecuted_blocks=1 00:06:38.519 00:06:38.519 ' 00:06:38.519 10:00:52 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:38.519 10:00:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:38.519 10:00:52 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:38.519 10:00:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.519 10:00:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.519 ************************************ 00:06:38.519 START TEST event_perf 00:06:38.519 ************************************ 00:06:38.519 10:00:52 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:38.519 Running I/O for 1 seconds...[2024-11-19 10:00:52.304491] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:38.519 [2024-11-19 10:00:52.304724] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57935 ] 00:06:38.778 [2024-11-19 10:00:52.453296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.778 [2024-11-19 10:00:52.514777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.778 [2024-11-19 10:00:52.514905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.778 [2024-11-19 10:00:52.515009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.778 Running I/O for 1 seconds...[2024-11-19 10:00:52.515207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.724 00:06:39.724 lcore 0: 196876 00:06:39.725 lcore 1: 196876 00:06:39.725 lcore 2: 196876 00:06:39.725 lcore 3: 196877 00:06:39.725 done. 
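(The lcore lines just above are the per-reactor event counts reported when the 1-second event_perf run on core mask 0xF completes. A sketch of the same invocation outside the test harness, using the binary and flags traced above:
    # 4 reactors (mask 0xF), 1-second run; each lcore reports its event count at the end
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
)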
00:06:39.725 ************************************ 00:06:39.725 END TEST event_perf 00:06:39.725 ************************************ 00:06:39.725 00:06:39.725 real 0m1.284s 00:06:39.725 user 0m4.108s 00:06:39.725 sys 0m0.051s 00:06:39.725 10:00:53 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.725 10:00:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.998 10:00:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:39.998 10:00:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:39.998 10:00:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.998 10:00:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.998 ************************************ 00:06:39.998 START TEST event_reactor 00:06:39.998 ************************************ 00:06:39.998 10:00:53 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:39.998 [2024-11-19 10:00:53.639596] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:39.998 [2024-11-19 10:00:53.640162] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57972 ] 00:06:39.998 [2024-11-19 10:00:53.781330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.998 [2024-11-19 10:00:53.837237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.377 test_start 00:06:41.377 oneshot 00:06:41.377 tick 100 00:06:41.377 tick 100 00:06:41.377 tick 250 00:06:41.377 tick 100 00:06:41.377 tick 100 00:06:41.377 tick 100 00:06:41.377 tick 250 00:06:41.377 tick 500 00:06:41.377 tick 100 00:06:41.377 tick 100 00:06:41.377 tick 250 00:06:41.377 tick 100 00:06:41.377 tick 100 00:06:41.377 test_end 00:06:41.377 00:06:41.377 real 0m1.260s 00:06:41.377 user 0m1.111s 00:06:41.377 sys 0m0.041s 00:06:41.377 10:00:54 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.377 ************************************ 00:06:41.377 END TEST event_reactor 00:06:41.377 ************************************ 00:06:41.377 10:00:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:41.377 10:00:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:41.377 10:00:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:41.377 10:00:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.377 10:00:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.377 ************************************ 00:06:41.377 START TEST event_reactor_perf 00:06:41.377 ************************************ 00:06:41.377 10:00:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:41.377 [2024-11-19 10:00:54.953486] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:41.377 [2024-11-19 10:00:54.953571] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58009 ] 00:06:41.377 [2024-11-19 10:00:55.089493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.377 [2024-11-19 10:00:55.138291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.314 test_start 00:06:42.314 test_end 00:06:42.314 Performance: 440424 events per second 00:06:42.314 00:06:42.314 real 0m1.247s 00:06:42.314 user 0m1.108s 00:06:42.314 sys 0m0.034s 00:06:42.314 10:00:56 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.314 ************************************ 00:06:42.314 END TEST event_reactor_perf 00:06:42.314 ************************************ 00:06:42.314 10:00:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.573 10:00:56 event -- event/event.sh@49 -- # uname -s 00:06:42.573 10:00:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:42.573 10:00:56 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:42.573 10:00:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.573 10:00:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.573 10:00:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.573 ************************************ 00:06:42.573 START TEST event_scheduler 00:06:42.573 ************************************ 00:06:42.573 10:00:56 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:42.573 * Looking for test storage... 
00:06:42.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:42.573 10:00:56 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.574 10:00:56 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.574 --rc genhtml_branch_coverage=1 00:06:42.574 --rc genhtml_function_coverage=1 00:06:42.574 --rc genhtml_legend=1 00:06:42.574 --rc geninfo_all_blocks=1 00:06:42.574 --rc geninfo_unexecuted_blocks=1 00:06:42.574 00:06:42.574 ' 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.574 --rc genhtml_branch_coverage=1 00:06:42.574 --rc genhtml_function_coverage=1 00:06:42.574 --rc genhtml_legend=1 00:06:42.574 --rc geninfo_all_blocks=1 00:06:42.574 --rc geninfo_unexecuted_blocks=1 00:06:42.574 00:06:42.574 ' 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.574 --rc genhtml_branch_coverage=1 00:06:42.574 --rc genhtml_function_coverage=1 00:06:42.574 --rc genhtml_legend=1 00:06:42.574 --rc geninfo_all_blocks=1 00:06:42.574 --rc geninfo_unexecuted_blocks=1 00:06:42.574 00:06:42.574 ' 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.574 --rc genhtml_branch_coverage=1 00:06:42.574 --rc genhtml_function_coverage=1 00:06:42.574 --rc genhtml_legend=1 00:06:42.574 --rc geninfo_all_blocks=1 00:06:42.574 --rc geninfo_unexecuted_blocks=1 00:06:42.574 00:06:42.574 ' 00:06:42.574 10:00:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:42.574 10:00:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58073 00:06:42.574 10:00:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:42.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.574 10:00:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.574 10:00:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58073 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58073 ']' 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.574 10:00:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.833 [2024-11-19 10:00:56.470676] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:42.833 [2024-11-19 10:00:56.470769] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58073 ] 00:06:42.833 [2024-11-19 10:00:56.619336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.833 [2024-11-19 10:00:56.667994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.833 [2024-11-19 10:00:56.668154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.833 [2024-11-19 10:00:56.668269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.833 [2024-11-19 10:00:56.668269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.833 10:00:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.833 10:00:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:42.833 10:00:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:42.833 10:00:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.833 10:00:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.833 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:42.833 POWER: Cannot set governor of lcore 0 to userspace 00:06:42.833 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:42.833 POWER: Cannot set governor of lcore 0 to performance 00:06:42.833 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:42.833 POWER: Cannot set governor of lcore 0 to userspace 00:06:43.092 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:43.092 POWER: Cannot set governor of lcore 0 to userspace 00:06:43.092 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:43.092 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:43.092 POWER: Unable to set Power Management Environment for lcore 0 00:06:43.092 [2024-11-19 10:00:56.723852] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:43.092 [2024-11-19 10:00:56.724044] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 
00:06:43.092 [2024-11-19 10:00:56.724235] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:43.092 [2024-11-19 10:00:56.724427] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:43.092 [2024-11-19 10:00:56.724631] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:43.092 [2024-11-19 10:00:56.724818] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:43.092 10:00:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.092 10:00:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:43.092 10:00:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.092 10:00:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.092 [2024-11-19 10:00:56.785546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.092 [2024-11-19 10:00:56.817287] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:43.092 10:00:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.092 10:00:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:43.092 10:00:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.092 10:00:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.092 10:00:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.092 ************************************ 00:06:43.092 START TEST scheduler_create_thread 00:06:43.092 ************************************ 00:06:43.092 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:43.092 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:43.092 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 2 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 3 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 4 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 5 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 6 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 7 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 8 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 9 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 10 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 
0 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.093 10:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.000 10:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.000 10:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:45.000 10:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:45.000 10:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.000 10:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.568 ************************************ 00:06:45.568 END TEST scheduler_create_thread 00:06:45.568 ************************************ 00:06:45.568 10:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.568 00:06:45.568 real 0m2.615s 00:06:45.568 user 0m0.018s 00:06:45.568 sys 0m0.007s 00:06:45.568 10:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.568 10:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.828 10:00:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:45.828 10:00:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58073 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58073 ']' 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58073 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58073 00:06:45.828 killing process with pid 58073 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@964 -- 
# '[' reactor_2 = sudo ']' 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58073' 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58073 00:06:45.828 10:00:59 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58073 00:06:46.088 [2024-11-19 10:00:59.925981] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:46.348 00:06:46.348 real 0m3.887s 00:06:46.348 user 0m5.679s 00:06:46.348 sys 0m0.365s 00:06:46.348 10:01:00 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.348 10:01:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.348 ************************************ 00:06:46.348 END TEST event_scheduler 00:06:46.348 ************************************ 00:06:46.348 10:01:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:46.348 10:01:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:46.348 10:01:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.348 10:01:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.348 10:01:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.348 ************************************ 00:06:46.348 START TEST app_repeat 00:06:46.348 ************************************ 00:06:46.348 10:01:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58165 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.348 Process app_repeat pid: 58165 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58165' 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.348 spdk_app_start Round 0 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:46.348 10:01:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58165 /var/tmp/spdk-nbd.sock 00:06:46.348 10:01:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58165 ']' 00:06:46.348 10:01:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.348 10:01:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.348 10:01:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
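app_repeat_test now runs three rounds (the for i in {0..2} loop above). Stripped of the xtrace noise, each round in the trace below amounts to the following; the $RPC shorthand for the rpc.py invocation is the only thing added here:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create 64 4096   # 64 MiB malloc bdev, 4096-byte blocks -> Malloc0
    $RPC bdev_malloc_create 64 4096   # second bdev -> Malloc1

    # Export both bdevs as /dev/nbd0 and /dev/nbd1, push random data through the
    # block devices, read it back and compare, then detach them again.
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

    # Tear the target down and give it a moment before the next round starts.
    $RPC spdk_kill_instance SIGTERM
    sleep 3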
00:06:46.348 10:01:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.348 10:01:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.348 [2024-11-19 10:01:00.217194] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:46.348 [2024-11-19 10:01:00.217304] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58165 ] 00:06:46.608 [2024-11-19 10:01:00.362365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.608 [2024-11-19 10:01:00.407550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.608 [2024-11-19 10:01:00.407558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.608 [2024-11-19 10:01:00.459441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.878 10:01:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.878 10:01:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.878 10:01:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.137 Malloc0 00:06:47.137 10:01:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.396 Malloc1 00:06:47.396 10:01:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.396 10:01:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:47.656 /dev/nbd0 00:06:47.656 10:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.656 10:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.656 1+0 records in 00:06:47.656 1+0 records out 00:06:47.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265328 s, 15.4 MB/s 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.656 10:01:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.656 10:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.656 10:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.656 10:01:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.915 /dev/nbd1 00:06:47.915 10:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.915 10:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.915 10:01:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:47.915 10:01:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.915 10:01:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.915 10:01:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.915 10:01:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:47.916 10:01:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.916 10:01:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.916 10:01:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.916 10:01:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.916 1+0 records in 00:06:47.916 1+0 records out 00:06:47.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302795 s, 13.5 MB/s 00:06:47.916 10:01:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.916 10:01:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.916 10:01:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.916 10:01:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.916 10:01:01 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:06:47.916 10:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.916 10:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.916 10:01:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.916 10:01:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.916 10:01:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.174 10:01:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:48.174 { 00:06:48.174 "nbd_device": "/dev/nbd0", 00:06:48.174 "bdev_name": "Malloc0" 00:06:48.174 }, 00:06:48.174 { 00:06:48.174 "nbd_device": "/dev/nbd1", 00:06:48.174 "bdev_name": "Malloc1" 00:06:48.174 } 00:06:48.174 ]' 00:06:48.174 10:01:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:48.174 { 00:06:48.174 "nbd_device": "/dev/nbd0", 00:06:48.174 "bdev_name": "Malloc0" 00:06:48.174 }, 00:06:48.174 { 00:06:48.175 "nbd_device": "/dev/nbd1", 00:06:48.175 "bdev_name": "Malloc1" 00:06:48.175 } 00:06:48.175 ]' 00:06:48.175 10:01:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:48.175 /dev/nbd1' 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:48.175 /dev/nbd1' 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:48.175 256+0 records in 00:06:48.175 256+0 records out 00:06:48.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00789035 s, 133 MB/s 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.175 10:01:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:48.434 256+0 records in 00:06:48.434 256+0 records out 00:06:48.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221631 s, 47.3 MB/s 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:48.434 256+0 records in 00:06:48.434 
256+0 records out 00:06:48.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222633 s, 47.1 MB/s 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.434 10:01:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.693 10:01:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
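The write/verify step that just ran (nbd_dd_data_verify) is a plain dd-and-cmp check against the exported devices. Every command here appears in the trace above; only the $tmp shorthand for the scratch file path is added:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # 1 MiB of random data, written through each NBD device with O_DIRECT...
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if="$tmp" of=/dev/nbd1 bs=4096 count=256 oflag=direct

    # ...then compared byte for byte; cmp exits non-zero on any mismatch, which
    # would fail the test.
    cmp -b -n 1M "$tmp" /dev/nbd0
    cmp -b -n 1M "$tmp" /dev/nbd1
    rm "$tmp"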
00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.954 10:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.214 10:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.214 10:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.214 10:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.214 10:01:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.214 10:01:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:49.473 10:01:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.731 [2024-11-19 10:01:03.427079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.731 [2024-11-19 10:01:03.460797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.731 [2024-11-19 10:01:03.460807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.731 [2024-11-19 10:01:03.515647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.731 [2024-11-19 10:01:03.515765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:49.731 [2024-11-19 10:01:03.515778] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.019 spdk_app_start Round 1 00:06:53.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:53.019 10:01:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.019 10:01:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:53.019 10:01:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58165 /var/tmp/spdk-nbd.sock 00:06:53.020 10:01:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58165 ']' 00:06:53.020 10:01:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.020 10:01:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.020 10:01:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
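The count check that closes each round (nbd_get_count) confirms nothing is still exported before the target is killed. Reconstructed from the trace as a sketch: the function wrapper, the hard-coded socket path, and the || true that keeps grep -c from aborting the script on zero matches are assumptions here; only the nbd_get_disks, jq, and grep steps are taken verbatim:

    nbd_get_count() {
        local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
        local disks names count
        disks=$($rpc nbd_get_disks)                        # '[]' after both nbd_stop_disk calls
        names=$(echo "$disks" | jq -r '.[] | .nbd_device') # empty when the list is empty
        count=$(echo "$names" | grep -c /dev/nbd || true)  # 0 when nothing is exported
        echo "$count"                                      # caller then checks '[' 0 -ne 0 ']'
    }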
00:06:53.020 10:01:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.020 10:01:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.020 10:01:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.020 10:01:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:53.020 10:01:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.020 Malloc0 00:06:53.020 10:01:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.280 Malloc1 00:06:53.280 10:01:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.280 10:01:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.539 /dev/nbd0 00:06:53.539 10:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.539 10:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.539 10:01:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.798 1+0 records in 00:06:53.798 1+0 records out 
00:06:53.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320272 s, 12.8 MB/s 00:06:53.799 10:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.799 10:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.799 10:01:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.799 10:01:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.799 10:01:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.799 10:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.799 10:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.799 10:01:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.058 /dev/nbd1 00:06:54.058 10:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.058 10:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.058 1+0 records in 00:06:54.058 1+0 records out 00:06:54.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311602 s, 13.1 MB/s 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.058 10:01:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:54.058 10:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.058 10:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.058 10:01:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.058 10:01:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.058 10:01:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.318 { 00:06:54.318 "nbd_device": "/dev/nbd0", 00:06:54.318 "bdev_name": "Malloc0" 00:06:54.318 }, 00:06:54.318 { 00:06:54.318 "nbd_device": "/dev/nbd1", 00:06:54.318 "bdev_name": "Malloc1" 00:06:54.318 } 
00:06:54.318 ]' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.318 { 00:06:54.318 "nbd_device": "/dev/nbd0", 00:06:54.318 "bdev_name": "Malloc0" 00:06:54.318 }, 00:06:54.318 { 00:06:54.318 "nbd_device": "/dev/nbd1", 00:06:54.318 "bdev_name": "Malloc1" 00:06:54.318 } 00:06:54.318 ]' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.318 /dev/nbd1' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.318 /dev/nbd1' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.318 256+0 records in 00:06:54.318 256+0 records out 00:06:54.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103644 s, 101 MB/s 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.318 256+0 records in 00:06:54.318 256+0 records out 00:06:54.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263743 s, 39.8 MB/s 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.318 256+0 records in 00:06:54.318 256+0 records out 00:06:54.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272755 s, 38.4 MB/s 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.318 10:01:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.577 10:01:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.577 10:01:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.577 10:01:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.577 10:01:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.577 10:01:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.577 10:01:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.577 10:01:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.837 10:01:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.096 10:01:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.355 10:01:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.355 10:01:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.614 10:01:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.872 [2024-11-19 10:01:09.563974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.872 [2024-11-19 10:01:09.609648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.872 [2024-11-19 10:01:09.609654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.872 [2024-11-19 10:01:09.662757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.872 [2024-11-19 10:01:09.662855] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.872 [2024-11-19 10:01:09.662884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.160 spdk_app_start Round 2 00:06:59.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.160 10:01:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.160 10:01:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:59.160 10:01:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58165 /var/tmp/spdk-nbd.sock 00:06:59.160 10:01:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58165 ']' 00:06:59.160 10:01:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.160 10:01:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.160 10:01:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
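The waitfornbd checks that follow every nbd_start_disk, in the round below and the two above, poll /proc/partitions for the device and then prove it is readable with one 4 KiB O_DIRECT read. A condensed sketch of that pattern; the retry interval is a guess (the trace never shows a failed try), and the real helper also wraps the dd itself in a second 20-try loop:

    waitfornbd() {
        local nbd_name=$1 i size
        local testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest

        # Wait up to 20 tries for the kernel to register the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # interval assumed; not visible in the trace
        done

        # A single direct-I/O read proves the device actually serves data.
        dd if=/dev/"$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$testfile")
        rm -f "$testfile"
        [ "$size" != 0 ]   # a non-empty read block means the device is usable
    }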
00:06:59.160 10:01:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.160 10:01:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.160 10:01:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.160 10:01:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:59.161 10:01:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.161 Malloc0 00:06:59.161 10:01:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.421 Malloc1 00:06:59.421 10:01:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.421 10:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.422 10:01:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.681 /dev/nbd0 00:06:59.681 10:01:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.681 10:01:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.681 1+0 records in 00:06:59.681 1+0 records out 
00:06:59.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225591 s, 18.2 MB/s 00:06:59.681 10:01:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.682 10:01:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.682 10:01:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.682 10:01:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.682 10:01:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.682 10:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.682 10:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.682 10:01:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.250 /dev/nbd1 00:07:00.250 10:01:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.250 10:01:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.250 1+0 records in 00:07:00.250 1+0 records out 00:07:00.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379406 s, 10.8 MB/s 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.250 10:01:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:00.250 10:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.250 10:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.250 10:01:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.250 10:01:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.251 10:01:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.510 { 00:07:00.510 "nbd_device": "/dev/nbd0", 00:07:00.510 "bdev_name": "Malloc0" 00:07:00.510 }, 00:07:00.510 { 00:07:00.510 "nbd_device": "/dev/nbd1", 00:07:00.510 "bdev_name": "Malloc1" 00:07:00.510 } 
00:07:00.510 ]' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.510 { 00:07:00.510 "nbd_device": "/dev/nbd0", 00:07:00.510 "bdev_name": "Malloc0" 00:07:00.510 }, 00:07:00.510 { 00:07:00.510 "nbd_device": "/dev/nbd1", 00:07:00.510 "bdev_name": "Malloc1" 00:07:00.510 } 00:07:00.510 ]' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.510 /dev/nbd1' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.510 /dev/nbd1' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.510 256+0 records in 00:07:00.510 256+0 records out 00:07:00.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110075 s, 95.3 MB/s 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.510 256+0 records in 00:07:00.510 256+0 records out 00:07:00.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246903 s, 42.5 MB/s 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.510 256+0 records in 00:07:00.510 256+0 records out 00:07:00.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030833 s, 34.0 MB/s 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.510 10:01:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.769 10:01:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.337 10:01:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.596 10:01:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.596 10:01:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:01.855 10:01:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.115 [2024-11-19 10:01:15.813760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.115 [2024-11-19 10:01:15.876583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.115 [2024-11-19 10:01:15.876601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.115 [2024-11-19 10:01:15.949482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.115 [2024-11-19 10:01:15.949593] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.115 [2024-11-19 10:01:15.949607] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.723 10:01:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58165 /var/tmp/spdk-nbd.sock 00:07:04.723 10:01:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58165 ']' 00:07:04.723 10:01:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.723 10:01:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.723 10:01:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:04.723 10:01:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.723 10:01:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.983 10:01:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.983 10:01:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.983 10:01:18 event.app_repeat -- event/event.sh@39 -- # killprocess 58165 00:07:05.242 10:01:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58165 ']' 00:07:05.242 10:01:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58165 00:07:05.243 10:01:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:05.243 10:01:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.243 10:01:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58165 00:07:05.243 killing process with pid 58165 00:07:05.243 10:01:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.243 10:01:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.243 10:01:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58165' 00:07:05.243 10:01:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58165 00:07:05.243 10:01:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58165 00:07:05.243 spdk_app_start is called in Round 0. 00:07:05.243 Shutdown signal received, stop current app iteration 00:07:05.243 Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 reinitialization... 00:07:05.243 spdk_app_start is called in Round 1. 00:07:05.243 Shutdown signal received, stop current app iteration 00:07:05.243 Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 reinitialization... 00:07:05.243 spdk_app_start is called in Round 2. 00:07:05.243 Shutdown signal received, stop current app iteration 00:07:05.243 Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 reinitialization... 00:07:05.243 spdk_app_start is called in Round 3. 00:07:05.243 Shutdown signal received, stop current app iteration 00:07:05.501 10:01:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:05.501 10:01:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:05.501 00:07:05.501 real 0m18.952s 00:07:05.501 user 0m43.135s 00:07:05.501 sys 0m2.882s 00:07:05.501 10:01:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.501 ************************************ 00:07:05.501 END TEST app_repeat 00:07:05.501 ************************************ 00:07:05.501 10:01:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.501 10:01:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:05.501 10:01:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:05.501 10:01:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.501 10:01:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.501 10:01:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.501 ************************************ 00:07:05.501 START TEST cpu_locks 00:07:05.501 ************************************ 00:07:05.501 10:01:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:05.501 * Looking for test storage... 
00:07:05.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:05.501 10:01:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.501 10:01:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.501 10:01:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.501 10:01:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.501 10:01:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:05.501 10:01:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.502 10:01:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.502 --rc genhtml_branch_coverage=1 00:07:05.502 --rc genhtml_function_coverage=1 00:07:05.502 --rc genhtml_legend=1 00:07:05.502 --rc geninfo_all_blocks=1 00:07:05.502 --rc geninfo_unexecuted_blocks=1 00:07:05.502 00:07:05.502 ' 00:07:05.502 10:01:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.502 --rc genhtml_branch_coverage=1 00:07:05.502 --rc genhtml_function_coverage=1 
00:07:05.502 --rc genhtml_legend=1 00:07:05.502 --rc geninfo_all_blocks=1 00:07:05.502 --rc geninfo_unexecuted_blocks=1 00:07:05.502 00:07:05.502 ' 00:07:05.502 10:01:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.502 --rc genhtml_branch_coverage=1 00:07:05.502 --rc genhtml_function_coverage=1 00:07:05.502 --rc genhtml_legend=1 00:07:05.502 --rc geninfo_all_blocks=1 00:07:05.502 --rc geninfo_unexecuted_blocks=1 00:07:05.502 00:07:05.502 ' 00:07:05.502 10:01:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.502 --rc genhtml_branch_coverage=1 00:07:05.502 --rc genhtml_function_coverage=1 00:07:05.502 --rc genhtml_legend=1 00:07:05.502 --rc geninfo_all_blocks=1 00:07:05.502 --rc geninfo_unexecuted_blocks=1 00:07:05.502 00:07:05.502 ' 00:07:05.502 10:01:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:05.502 10:01:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:05.502 10:01:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:05.502 10:01:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:05.502 10:01:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.502 10:01:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.502 10:01:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.761 ************************************ 00:07:05.761 START TEST default_locks 00:07:05.761 ************************************ 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58598 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58598 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58598 ']' 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.761 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.761 [2024-11-19 10:01:19.449277] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:05.761 [2024-11-19 10:01:19.449374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58598 ] 00:07:05.761 [2024-11-19 10:01:19.590871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.020 [2024-11-19 10:01:19.661241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.020 [2024-11-19 10:01:19.754301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.280 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.280 10:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:06.280 10:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58598 00:07:06.280 10:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58598 00:07:06.280 10:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58598 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58598 ']' 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58598 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58598 00:07:06.539 killing process with pid 58598 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58598' 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58598 00:07:06.539 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58598 00:07:06.798 10:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58598 00:07:06.798 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58598 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58598 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58598 ']' 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.799 
10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.799 ERROR: process (pid: 58598) is no longer running 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58598) - No such process 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:06.799 ************************************ 00:07:06.799 END TEST default_locks 00:07:06.799 ************************************ 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:06.799 00:07:06.799 real 0m1.260s 00:07:06.799 user 0m1.167s 00:07:06.799 sys 0m0.499s 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.799 10:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.058 10:01:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:07.058 10:01:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.058 10:01:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.058 10:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.058 ************************************ 00:07:07.058 START TEST default_locks_via_rpc 00:07:07.058 ************************************ 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:07.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58648 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58648 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58648 ']' 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.058 10:01:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.058 [2024-11-19 10:01:20.769262] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:07.058 [2024-11-19 10:01:20.769822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58648 ] 00:07:07.058 [2024-11-19 10:01:20.916000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.317 [2024-11-19 10:01:20.965989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.317 [2024-11-19 10:01:21.035849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 58648 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58648 00:07:07.885 10:01:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58648 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58648 ']' 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58648 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58648 00:07:08.452 killing process with pid 58648 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58648' 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58648 00:07:08.452 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58648 00:07:08.711 00:07:08.711 real 0m1.837s 00:07:08.711 user 0m1.956s 00:07:08.711 sys 0m0.554s 00:07:08.711 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.711 ************************************ 00:07:08.711 END TEST default_locks_via_rpc 00:07:08.711 ************************************ 00:07:08.711 10:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.711 10:01:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:08.711 10:01:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.711 10:01:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.711 10:01:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.711 ************************************ 00:07:08.711 START TEST non_locking_app_on_locked_coremask 00:07:08.711 ************************************ 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58694 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58694 /var/tmp/spdk.sock 00:07:08.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58694 ']' 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.711 10:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.969 [2024-11-19 10:01:22.656562] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:08.969 [2024-11-19 10:01:22.656676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58694 ] 00:07:08.969 [2024-11-19 10:01:22.803851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.227 [2024-11-19 10:01:22.862350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.227 [2024-11-19 10:01:22.933248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58702 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58702 /var/tmp/spdk2.sock 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58702 ']' 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.486 10:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.486 [2024-11-19 10:01:23.202392] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:09.486 [2024-11-19 10:01:23.203297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58702 ] 00:07:09.486 [2024-11-19 10:01:23.360351] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.486 [2024-11-19 10:01:23.360410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.746 [2024-11-19 10:01:23.476176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.746 [2024-11-19 10:01:23.625542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.684 10:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.684 10:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:10.684 10:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58694 00:07:10.684 10:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58694 00:07:10.684 10:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58694 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58694 ']' 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58694 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58694 00:07:11.622 killing process with pid 58694 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58694' 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58694 00:07:11.622 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58694 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58702 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58702 ']' 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@958 -- # kill -0 58702 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58702 00:07:12.190 killing process with pid 58702 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58702' 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58702 00:07:12.190 10:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58702 00:07:12.449 ************************************ 00:07:12.449 END TEST non_locking_app_on_locked_coremask 00:07:12.449 ************************************ 00:07:12.449 00:07:12.449 real 0m3.737s 00:07:12.449 user 0m4.075s 00:07:12.449 sys 0m1.144s 00:07:12.449 10:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.449 10:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.708 10:01:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:12.708 10:01:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.708 10:01:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.708 10:01:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.708 ************************************ 00:07:12.708 START TEST locking_app_on_unlocked_coremask 00:07:12.708 ************************************ 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58769 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58769 /var/tmp/spdk.sock 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58769 ']' 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.708 10:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.708 [2024-11-19 10:01:26.454904] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:12.708 [2024-11-19 10:01:26.455068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58769 ] 00:07:12.967 [2024-11-19 10:01:26.616371] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:12.967 [2024-11-19 10:01:26.616409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.967 [2024-11-19 10:01:26.666479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.967 [2024-11-19 10:01:26.734964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.905 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.905 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:13.905 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58785 00:07:13.905 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:13.905 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58785 /var/tmp/spdk2.sock 00:07:13.905 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58785 ']' 00:07:13.905 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.906 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.906 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.906 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.906 10:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.906 [2024-11-19 10:01:27.543434] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:13.906 [2024-11-19 10:01:27.544273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58785 ] 00:07:13.906 [2024-11-19 10:01:27.702356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.906 [2024-11-19 10:01:27.783462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.165 [2024-11-19 10:01:27.916683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.735 10:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.735 10:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:14.735 10:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58785 00:07:14.735 10:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.735 10:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58785 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58769 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58769 ']' 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58769 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58769 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.304 killing process with pid 58769 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58769' 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58769 00:07:15.304 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58769 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58785 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58785 ']' 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58785 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58785 00:07:16.240 killing process with pid 58785 00:07:16.240 10:01:29 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58785' 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58785 00:07:16.240 10:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58785 00:07:16.500 00:07:16.500 real 0m3.843s 00:07:16.500 user 0m4.336s 00:07:16.500 sys 0m0.997s 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.500 ************************************ 00:07:16.500 END TEST locking_app_on_unlocked_coremask 00:07:16.500 ************************************ 00:07:16.500 10:01:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:16.500 10:01:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.500 10:01:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.500 10:01:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.500 ************************************ 00:07:16.500 START TEST locking_app_on_locked_coremask 00:07:16.500 ************************************ 00:07:16.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58852 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58852 /var/tmp/spdk.sock 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58852 ']' 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.500 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.500 [2024-11-19 10:01:30.345750] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:16.500 [2024-11-19 10:01:30.346052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58852 ] 00:07:16.759 [2024-11-19 10:01:30.493126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.759 [2024-11-19 10:01:30.544249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.759 [2024-11-19 10:01:30.618298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58861 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58861 /var/tmp/spdk2.sock 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58861 /var/tmp/spdk2.sock 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58861 /var/tmp/spdk2.sock 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58861 ']' 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.069 10:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.069 [2024-11-19 10:01:30.901802] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:17.069 [2024-11-19 10:01:30.901932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58861 ] 00:07:17.328 [2024-11-19 10:01:31.062844] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58852 has claimed it. 00:07:17.328 [2024-11-19 10:01:31.067080] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.894 ERROR: process (pid: 58861) is no longer running 00:07:17.894 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58861) - No such process 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58852 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58852 00:07:17.894 10:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58852 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58852 ']' 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58852 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58852 00:07:18.461 killing process with pid 58852 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58852' 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58852 00:07:18.461 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58852 00:07:18.720 ************************************ 00:07:18.720 END TEST locking_app_on_locked_coremask 00:07:18.720 ************************************ 00:07:18.720 00:07:18.720 real 0m2.163s 00:07:18.720 user 0m2.437s 00:07:18.720 sys 0m0.607s 00:07:18.720 10:01:32 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.720 10:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.720 10:01:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:18.720 10:01:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.720 10:01:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.720 10:01:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.720 ************************************ 00:07:18.720 START TEST locking_overlapped_coremask 00:07:18.720 ************************************ 00:07:18.720 10:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58906 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58906 /var/tmp/spdk.sock 00:07:18.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58906 ']' 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.721 10:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.721 [2024-11-19 10:01:32.553435] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:18.721 [2024-11-19 10:01:32.553712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58906 ] 00:07:18.979 [2024-11-19 10:01:32.693793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.979 [2024-11-19 10:01:32.741170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.979 [2024-11-19 10:01:32.741303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.979 [2024-11-19 10:01:32.741307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.979 [2024-11-19 10:01:32.814777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58917 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58917 /var/tmp/spdk2.sock 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58917 /var/tmp/spdk2.sock 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58917 /var/tmp/spdk2.sock 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58917 ']' 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.239 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.239 [2024-11-19 10:01:33.079886] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
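The two core masks used by this test overlap by construction; a quick check of that overlap, with the mask values taken from the two spdk_tgt invocations above (an illustration of why the claim below fails, not part of the test script itself):

    # 0x7 = binary 00111 -> cores 0,1,2 ; 0x1c = binary 11100 -> cores 2,3,4
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2: core 2 is contested

The second target therefore cannot take the per-core lock file for core 2, which is exactly the error reported next.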
00:07:19.239 [2024-11-19 10:01:33.080201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58917 ] 00:07:19.498 [2024-11-19 10:01:33.244448] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58906 has claimed it. 00:07:19.498 [2024-11-19 10:01:33.244535] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:20.065 ERROR: process (pid: 58917) is no longer running 00:07:20.065 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58917) - No such process 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58906 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58906 ']' 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58906 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58906 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58906' 00:07:20.065 killing process with pid 58906 00:07:20.065 10:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58906 00:07:20.065 10:01:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58906 00:07:20.634 00:07:20.634 real 0m1.727s 00:07:20.634 user 0m4.710s 00:07:20.634 sys 0m0.424s 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.634 ************************************ 00:07:20.634 END TEST locking_overlapped_coremask 00:07:20.634 ************************************ 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.634 10:01:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:20.634 10:01:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.634 10:01:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.634 10:01:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.634 ************************************ 00:07:20.634 START TEST locking_overlapped_coremask_via_rpc 00:07:20.634 ************************************ 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:20.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58962 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58962 /var/tmp/spdk.sock 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58962 ']' 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.634 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.634 [2024-11-19 10:01:34.346846] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:20.634 [2024-11-19 10:01:34.347178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58962 ] 00:07:20.634 [2024-11-19 10:01:34.493611] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.634 [2024-11-19 10:01:34.493795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.893 [2024-11-19 10:01:34.539854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.893 [2024-11-19 10:01:34.540004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.893 [2024-11-19 10:01:34.540008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.893 [2024-11-19 10:01:34.614842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58967 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58967 /var/tmp/spdk2.sock 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58967 ']' 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.153 10:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.153 [2024-11-19 10:01:34.878708] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:21.153 [2024-11-19 10:01:34.878981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:07:21.153 [2024-11-19 10:01:35.036473] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.153 [2024-11-19 10:01:35.036515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.411 [2024-11-19 10:01:35.153204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.412 [2024-11-19 10:01:35.153266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.412 [2024-11-19 10:01:35.153267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.670 [2024-11-19 10:01:35.300380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.239 [2024-11-19 10:01:35.928088] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58962 has claimed it. 
00:07:22.239 request: 00:07:22.239 { 00:07:22.239 "method": "framework_enable_cpumask_locks", 00:07:22.239 "req_id": 1 00:07:22.239 } 00:07:22.239 Got JSON-RPC error response 00:07:22.239 response: 00:07:22.239 { 00:07:22.239 "code": -32603, 00:07:22.239 "message": "Failed to claim CPU core: 2" 00:07:22.239 } 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58962 /var/tmp/spdk.sock 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58962 ']' 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.239 10:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58967 /var/tmp/spdk2.sock 00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58967 ']' 00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
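The -32603 response above is the RPC-level view of the same core-claim conflict: the second target was started with --disable-cpumask-locks, so it only tries to take its per-core lock files when framework_enable_cpumask_locks is called, and core 2 is still held by pid 58962. A minimal way to issue the same call by hand, assuming the sockets from this run are still live (rpc_cmd in the test harness wraps scripts/rpc.py):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # fails with code -32603, "Failed to claim CPU core: 2", while pid 58962 holds the lock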
00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.498 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.757 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.757 ************************************ 00:07:22.757 END TEST locking_overlapped_coremask_via_rpc 00:07:22.757 ************************************ 00:07:22.757 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:22.757 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:22.757 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.757 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.757 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.757 00:07:22.757 real 0m2.228s 00:07:22.757 user 0m1.265s 00:07:22.757 sys 0m0.178s 00:07:22.757 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.757 10:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.757 10:01:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:22.757 10:01:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58962 ]] 00:07:22.757 10:01:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58962 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58962 ']' 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58962 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58962 00:07:22.757 killing process with pid 58962 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58962' 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58962 00:07:22.757 10:01:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58962 00:07:23.325 10:01:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58967 ]] 00:07:23.325 10:01:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58967 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58967 ']' 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58967 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.325 
10:01:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58967 00:07:23.325 killing process with pid 58967 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58967' 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58967 00:07:23.325 10:01:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58967 00:07:23.584 10:01:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.584 Process with pid 58962 is not found 00:07:23.584 Process with pid 58967 is not found 00:07:23.584 10:01:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.584 10:01:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58962 ]] 00:07:23.584 10:01:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58962 00:07:23.584 10:01:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58962 ']' 00:07:23.584 10:01:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58962 00:07:23.584 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58962) - No such process 00:07:23.584 10:01:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58962 is not found' 00:07:23.584 10:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58967 ]] 00:07:23.584 10:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58967 00:07:23.584 10:01:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58967 ']' 00:07:23.584 10:01:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58967 00:07:23.584 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58967) - No such process 00:07:23.584 10:01:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58967 is not found' 00:07:23.584 10:01:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.584 ************************************ 00:07:23.584 END TEST cpu_locks 00:07:23.584 ************************************ 00:07:23.584 00:07:23.584 real 0m18.190s 00:07:23.584 user 0m31.625s 00:07:23.584 sys 0m5.339s 00:07:23.584 10:01:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.584 10:01:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.584 ************************************ 00:07:23.584 END TEST event 00:07:23.584 ************************************ 00:07:23.584 00:07:23.584 real 0m45.344s 00:07:23.584 user 1m26.975s 00:07:23.584 sys 0m9.006s 00:07:23.584 10:01:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.584 10:01:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.584 10:01:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:23.584 10:01:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.584 10:01:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.584 10:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:23.584 ************************************ 00:07:23.584 START TEST thread 00:07:23.584 ************************************ 00:07:23.584 10:01:37 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:23.843 * Looking for test storage... 
00:07:23.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.843 10:01:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.843 10:01:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.843 10:01:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.843 10:01:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.843 10:01:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.843 10:01:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.843 10:01:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.843 10:01:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.843 10:01:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.843 10:01:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.843 10:01:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.843 10:01:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:23.843 10:01:37 thread -- scripts/common.sh@345 -- # : 1 00:07:23.843 10:01:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.843 10:01:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.843 10:01:37 thread -- scripts/common.sh@365 -- # decimal 1 00:07:23.843 10:01:37 thread -- scripts/common.sh@353 -- # local d=1 00:07:23.843 10:01:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.843 10:01:37 thread -- scripts/common.sh@355 -- # echo 1 00:07:23.843 10:01:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.843 10:01:37 thread -- scripts/common.sh@366 -- # decimal 2 00:07:23.843 10:01:37 thread -- scripts/common.sh@353 -- # local d=2 00:07:23.843 10:01:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.843 10:01:37 thread -- scripts/common.sh@355 -- # echo 2 00:07:23.843 10:01:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.843 10:01:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.843 10:01:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.843 10:01:37 thread -- scripts/common.sh@368 -- # return 0 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.843 --rc genhtml_branch_coverage=1 00:07:23.843 --rc genhtml_function_coverage=1 00:07:23.843 --rc genhtml_legend=1 00:07:23.843 --rc geninfo_all_blocks=1 00:07:23.843 --rc geninfo_unexecuted_blocks=1 00:07:23.843 00:07:23.843 ' 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.843 --rc genhtml_branch_coverage=1 00:07:23.843 --rc genhtml_function_coverage=1 00:07:23.843 --rc genhtml_legend=1 00:07:23.843 --rc geninfo_all_blocks=1 00:07:23.843 --rc geninfo_unexecuted_blocks=1 00:07:23.843 00:07:23.843 ' 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:23.843 --rc genhtml_branch_coverage=1 00:07:23.843 --rc genhtml_function_coverage=1 00:07:23.843 --rc genhtml_legend=1 00:07:23.843 --rc geninfo_all_blocks=1 00:07:23.843 --rc geninfo_unexecuted_blocks=1 00:07:23.843 00:07:23.843 ' 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.843 --rc genhtml_branch_coverage=1 00:07:23.843 --rc genhtml_function_coverage=1 00:07:23.843 --rc genhtml_legend=1 00:07:23.843 --rc geninfo_all_blocks=1 00:07:23.843 --rc geninfo_unexecuted_blocks=1 00:07:23.843 00:07:23.843 ' 00:07:23.843 10:01:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.843 10:01:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.843 ************************************ 00:07:23.843 START TEST thread_poller_perf 00:07:23.843 ************************************ 00:07:23.843 10:01:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.844 [2024-11-19 10:01:37.684339] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:23.844 [2024-11-19 10:01:37.684613] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59103 ] 00:07:24.102 [2024-11-19 10:01:37.834060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.102 [2024-11-19 10:01:37.887236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.102 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:25.481 [2024-11-19T10:01:39.370Z] ====================================== 00:07:25.481 [2024-11-19T10:01:39.370Z] busy:2207019220 (cyc) 00:07:25.481 [2024-11-19T10:01:39.370Z] total_run_count: 377000 00:07:25.481 [2024-11-19T10:01:39.370Z] tsc_hz: 2200000000 (cyc) 00:07:25.481 [2024-11-19T10:01:39.370Z] ====================================== 00:07:25.481 [2024-11-19T10:01:39.370Z] poller_cost: 5854 (cyc), 2660 (nsec) 00:07:25.481 00:07:25.481 real 0m1.291s 00:07:25.481 user 0m1.127s 00:07:25.481 sys 0m0.057s 00:07:25.481 10:01:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.481 10:01:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.481 ************************************ 00:07:25.481 END TEST thread_poller_perf 00:07:25.481 ************************************ 00:07:25.481 10:01:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.481 10:01:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:25.481 10:01:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.481 10:01:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.481 ************************************ 00:07:25.481 START TEST thread_poller_perf 00:07:25.481 ************************************ 00:07:25.481 10:01:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.481 [2024-11-19 10:01:39.024160] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:25.482 [2024-11-19 10:01:39.024285] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:07:25.482 [2024-11-19 10:01:39.169016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.482 Running 1000 pollers for 1 seconds with 0 microseconds period. 
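The poller_cost figure in the summary above is derived directly from the other three numbers: busy cycles divided by total_run_count, then converted to nanoseconds with the reported tsc_hz. A quick recomputation for the 1-microsecond-period run, with the values copied from its summary:

    echo $(( 2207019220 / 377000 ))                  # ~5854 cycles per poller iteration
    python3 -c 'print(2207019220 / 377000 / 2.2)'    # ~2661 ns at tsc_hz 2200000000, matching the ~2660 nsec reported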
00:07:25.482 [2024-11-19 10:01:39.207697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.421 [2024-11-19T10:01:40.310Z] ====================================== 00:07:26.421 [2024-11-19T10:01:40.310Z] busy:2202206294 (cyc) 00:07:26.421 [2024-11-19T10:01:40.310Z] total_run_count: 5077000 00:07:26.421 [2024-11-19T10:01:40.310Z] tsc_hz: 2200000000 (cyc) 00:07:26.421 [2024-11-19T10:01:40.310Z] ====================================== 00:07:26.421 [2024-11-19T10:01:40.310Z] poller_cost: 433 (cyc), 196 (nsec) 00:07:26.421 00:07:26.421 real 0m1.248s 00:07:26.421 user 0m1.100s 00:07:26.421 sys 0m0.042s 00:07:26.421 10:01:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.421 10:01:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.421 ************************************ 00:07:26.421 END TEST thread_poller_perf 00:07:26.421 ************************************ 00:07:26.421 10:01:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:26.421 ************************************ 00:07:26.421 END TEST thread 00:07:26.421 ************************************ 00:07:26.421 00:07:26.421 real 0m2.834s 00:07:26.421 user 0m2.367s 00:07:26.421 sys 0m0.247s 00:07:26.421 10:01:40 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.421 10:01:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.680 10:01:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:26.680 10:01:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.680 10:01:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.680 10:01:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.680 10:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.680 ************************************ 00:07:26.680 START TEST app_cmdline 00:07:26.680 ************************************ 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.680 * Looking for test storage... 
00:07:26.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.680 10:01:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.680 --rc genhtml_branch_coverage=1 00:07:26.680 --rc genhtml_function_coverage=1 00:07:26.680 --rc genhtml_legend=1 00:07:26.680 --rc geninfo_all_blocks=1 00:07:26.680 --rc geninfo_unexecuted_blocks=1 00:07:26.680 00:07:26.680 ' 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.680 --rc genhtml_branch_coverage=1 00:07:26.680 --rc genhtml_function_coverage=1 00:07:26.680 --rc genhtml_legend=1 00:07:26.680 --rc geninfo_all_blocks=1 00:07:26.680 --rc geninfo_unexecuted_blocks=1 00:07:26.680 
00:07:26.680 ' 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.680 --rc genhtml_branch_coverage=1 00:07:26.680 --rc genhtml_function_coverage=1 00:07:26.680 --rc genhtml_legend=1 00:07:26.680 --rc geninfo_all_blocks=1 00:07:26.680 --rc geninfo_unexecuted_blocks=1 00:07:26.680 00:07:26.680 ' 00:07:26.680 10:01:40 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.680 --rc genhtml_branch_coverage=1 00:07:26.680 --rc genhtml_function_coverage=1 00:07:26.680 --rc genhtml_legend=1 00:07:26.680 --rc geninfo_all_blocks=1 00:07:26.680 --rc geninfo_unexecuted_blocks=1 00:07:26.680 00:07:26.680 ' 00:07:26.680 10:01:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:26.680 10:01:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59221 00:07:26.680 10:01:40 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:26.681 10:01:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59221 00:07:26.681 10:01:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59221 ']' 00:07:26.681 10:01:40 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.681 10:01:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.681 10:01:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.681 10:01:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.681 10:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.939 [2024-11-19 10:01:40.611373] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:26.939 [2024-11-19 10:01:40.611510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59221 ] 00:07:26.939 [2024-11-19 10:01:40.758163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.939 [2024-11-19 10:01:40.804265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.198 [2024-11-19 10:01:40.873820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.198 10:01:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.198 10:01:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:27.198 10:01:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:27.499 { 00:07:27.499 "version": "SPDK v25.01-pre git sha1 fc96810c2", 00:07:27.499 "fields": { 00:07:27.499 "major": 25, 00:07:27.499 "minor": 1, 00:07:27.499 "patch": 0, 00:07:27.499 "suffix": "-pre", 00:07:27.499 "commit": "fc96810c2" 00:07:27.499 } 00:07:27.499 } 00:07:27.499 10:01:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:27.499 10:01:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:27.499 10:01:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:27.499 10:01:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:27.499 10:01:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:27.499 10:01:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:27.499 10:01:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:27.499 10:01:41 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.499 10:01:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.757 10:01:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:27.757 10:01:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:27.757 10:01:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:27.757 10:01:41 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.757 request: 00:07:27.757 { 00:07:27.757 "method": "env_dpdk_get_mem_stats", 00:07:27.757 "req_id": 1 00:07:27.757 } 00:07:27.757 Got JSON-RPC error response 00:07:27.757 response: 00:07:27.757 { 00:07:27.757 "code": -32601, 00:07:27.757 "message": "Method not found" 00:07:27.757 } 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.015 10:01:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59221 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59221 ']' 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59221 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59221 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.015 killing process with pid 59221 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59221' 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 59221 00:07:28.015 10:01:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 59221 00:07:28.273 00:07:28.273 real 0m1.710s 00:07:28.273 user 0m2.071s 00:07:28.273 sys 0m0.458s 00:07:28.273 10:01:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.273 10:01:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.273 ************************************ 00:07:28.273 END TEST app_cmdline 00:07:28.273 ************************************ 00:07:28.273 10:01:42 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.273 10:01:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.273 10:01:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.273 10:01:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.273 ************************************ 00:07:28.273 START TEST version 00:07:28.273 ************************************ 00:07:28.273 10:01:42 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.531 * Looking for test storage... 
00:07:28.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:28.531 10:01:42 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.531 10:01:42 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.531 10:01:42 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.531 10:01:42 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.531 10:01:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.531 10:01:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.531 10:01:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.531 10:01:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.531 10:01:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.531 10:01:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.531 10:01:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.531 10:01:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.531 10:01:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.531 10:01:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.531 10:01:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.531 10:01:42 version -- scripts/common.sh@344 -- # case "$op" in 00:07:28.531 10:01:42 version -- scripts/common.sh@345 -- # : 1 00:07:28.531 10:01:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.531 10:01:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.531 10:01:42 version -- scripts/common.sh@365 -- # decimal 1 00:07:28.531 10:01:42 version -- scripts/common.sh@353 -- # local d=1 00:07:28.531 10:01:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.532 10:01:42 version -- scripts/common.sh@355 -- # echo 1 00:07:28.532 10:01:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.532 10:01:42 version -- scripts/common.sh@366 -- # decimal 2 00:07:28.532 10:01:42 version -- scripts/common.sh@353 -- # local d=2 00:07:28.532 10:01:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.532 10:01:42 version -- scripts/common.sh@355 -- # echo 2 00:07:28.532 10:01:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.532 10:01:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.532 10:01:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.532 10:01:42 version -- scripts/common.sh@368 -- # return 0 00:07:28.532 10:01:42 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.532 10:01:42 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.532 --rc genhtml_branch_coverage=1 00:07:28.532 --rc genhtml_function_coverage=1 00:07:28.532 --rc genhtml_legend=1 00:07:28.532 --rc geninfo_all_blocks=1 00:07:28.532 --rc geninfo_unexecuted_blocks=1 00:07:28.532 00:07:28.532 ' 00:07:28.532 10:01:42 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.532 --rc genhtml_branch_coverage=1 00:07:28.532 --rc genhtml_function_coverage=1 00:07:28.532 --rc genhtml_legend=1 00:07:28.532 --rc geninfo_all_blocks=1 00:07:28.532 --rc geninfo_unexecuted_blocks=1 00:07:28.532 00:07:28.532 ' 00:07:28.532 10:01:42 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.532 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:28.532 --rc genhtml_branch_coverage=1 00:07:28.532 --rc genhtml_function_coverage=1 00:07:28.532 --rc genhtml_legend=1 00:07:28.532 --rc geninfo_all_blocks=1 00:07:28.532 --rc geninfo_unexecuted_blocks=1 00:07:28.532 00:07:28.532 ' 00:07:28.532 10:01:42 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.532 --rc genhtml_branch_coverage=1 00:07:28.532 --rc genhtml_function_coverage=1 00:07:28.532 --rc genhtml_legend=1 00:07:28.532 --rc geninfo_all_blocks=1 00:07:28.532 --rc geninfo_unexecuted_blocks=1 00:07:28.532 00:07:28.532 ' 00:07:28.532 10:01:42 version -- app/version.sh@17 -- # get_header_version major 00:07:28.532 10:01:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.532 10:01:42 version -- app/version.sh@14 -- # cut -f2 00:07:28.532 10:01:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.532 10:01:42 version -- app/version.sh@17 -- # major=25 00:07:28.532 10:01:42 version -- app/version.sh@18 -- # get_header_version minor 00:07:28.532 10:01:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.532 10:01:42 version -- app/version.sh@14 -- # cut -f2 00:07:28.532 10:01:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.532 10:01:42 version -- app/version.sh@18 -- # minor=1 00:07:28.532 10:01:42 version -- app/version.sh@19 -- # get_header_version patch 00:07:28.532 10:01:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.532 10:01:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.532 10:01:42 version -- app/version.sh@14 -- # cut -f2 00:07:28.532 10:01:42 version -- app/version.sh@19 -- # patch=0 00:07:28.532 10:01:42 version -- app/version.sh@20 -- # get_header_version suffix 00:07:28.532 10:01:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.532 10:01:42 version -- app/version.sh@14 -- # cut -f2 00:07:28.532 10:01:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.532 10:01:42 version -- app/version.sh@20 -- # suffix=-pre 00:07:28.532 10:01:42 version -- app/version.sh@22 -- # version=25.1 00:07:28.532 10:01:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:28.532 10:01:42 version -- app/version.sh@28 -- # version=25.1rc0 00:07:28.532 10:01:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:28.532 10:01:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:28.532 10:01:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:28.532 10:01:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:28.532 00:07:28.532 real 0m0.258s 00:07:28.532 user 0m0.175s 00:07:28.532 sys 0m0.117s 00:07:28.532 10:01:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.532 10:01:42 version -- common/autotest_common.sh@10 -- # set +x 00:07:28.532 ************************************ 00:07:28.532 END TEST version 00:07:28.532 ************************************ 00:07:28.532 10:01:42 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:28.532 10:01:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:28.532 10:01:42 -- spdk/autotest.sh@194 -- # uname -s 00:07:28.532 10:01:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:28.532 10:01:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:28.532 10:01:42 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:28.532 10:01:42 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:28.532 10:01:42 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:28.532 10:01:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.790 10:01:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.790 10:01:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.790 ************************************ 00:07:28.790 START TEST spdk_dd 00:07:28.790 ************************************ 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:28.790 * Looking for test storage... 00:07:28.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.790 --rc genhtml_branch_coverage=1 00:07:28.790 --rc genhtml_function_coverage=1 00:07:28.790 --rc genhtml_legend=1 00:07:28.790 --rc geninfo_all_blocks=1 00:07:28.790 --rc geninfo_unexecuted_blocks=1 00:07:28.790 00:07:28.790 ' 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.790 --rc genhtml_branch_coverage=1 00:07:28.790 --rc genhtml_function_coverage=1 00:07:28.790 --rc genhtml_legend=1 00:07:28.790 --rc geninfo_all_blocks=1 00:07:28.790 --rc geninfo_unexecuted_blocks=1 00:07:28.790 00:07:28.790 ' 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.790 --rc genhtml_branch_coverage=1 00:07:28.790 --rc genhtml_function_coverage=1 00:07:28.790 --rc genhtml_legend=1 00:07:28.790 --rc geninfo_all_blocks=1 00:07:28.790 --rc geninfo_unexecuted_blocks=1 00:07:28.790 00:07:28.790 ' 00:07:28.790 10:01:42 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.790 --rc genhtml_branch_coverage=1 00:07:28.790 --rc genhtml_function_coverage=1 00:07:28.790 --rc genhtml_legend=1 00:07:28.790 --rc geninfo_all_blocks=1 00:07:28.790 --rc geninfo_unexecuted_blocks=1 00:07:28.790 00:07:28.790 ' 00:07:28.790 10:01:42 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.790 10:01:42 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.791 10:01:42 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.791 10:01:42 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.791 10:01:42 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.791 10:01:42 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:28.791 10:01:42 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.791 10:01:42 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:29.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.360 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:29.360 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:29.360 10:01:43 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:29.360 10:01:43 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:29.360 10:01:43 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:29.360 10:01:43 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:29.360 10:01:43 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
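The device-discovery trace above ends with nvme_in_userspace resolving two controllers, 0000:00:10.0 and 0000:00:11.0, by filtering lspci output for PCI class 01, subclass 08, prog-if 02. A minimal standalone sketch of that filter, condensed from the commands visible in the trace rather than taken from scripts/common.sh itself, looks like this:

# Hedged sketch: list NVMe controller BDFs the way the traced pipeline does.
# Class/subclass "0108" plus prog-if 02 identifies NVMe; tr strips the quoting
# that lspci -mm adds around fields.
lspci -mm -n -D | grep -i -- -p02 | \
    awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# On this VM the expected output (per the log) is 0000:00:10.0 and 0000:00:11.0.
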
00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
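The loop traced here is the liburing link check: objdump lists the NEEDED entries of the spdk_dd binary and each library name is matched against liburing.so.*. A condensed sketch of that logic, assuming the same binary path as in the trace and not claiming to be the literal dd/common.sh implementation, is:

# Hedged sketch of the dependency scan seen in the trace: objdump prints the
# dynamic NEEDED entries, read splits each line into "NEEDED <libname>", and
# any match against liburing.so.* marks liburing as in use.
liburing_in_use=0
while read -r _ lib _; do
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
    fi
done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
(( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'

In this run the comparison eventually succeeds on liburing.so.2, which is why the trace prints "spdk_dd linked to liburing" and later exports liburing_in_use=1.
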
00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:29.360 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:29.361 * spdk_dd linked to liburing 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:29.361 10:01:43 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:29.361 10:01:43 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:29.362 10:01:43 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:29.362 10:01:43 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:29.362 10:01:43 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:29.362 10:01:43 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:29.362 10:01:43 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:29.362 10:01:43 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:29.362 10:01:43 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:29.362 10:01:43 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:29.362 10:01:43 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:29.362 10:01:43 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.362 10:01:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 ************************************ 00:07:29.362 START TEST spdk_dd_basic_rw 00:07:29.362 ************************************ 00:07:29.362 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:29.362 * Looking for test storage... 00:07:29.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:29.362 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.362 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.362 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.623 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.624 --rc genhtml_branch_coverage=1 00:07:29.624 --rc genhtml_function_coverage=1 00:07:29.624 --rc genhtml_legend=1 00:07:29.624 --rc geninfo_all_blocks=1 00:07:29.624 --rc geninfo_unexecuted_blocks=1 00:07:29.624 00:07:29.624 ' 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.624 --rc genhtml_branch_coverage=1 00:07:29.624 --rc genhtml_function_coverage=1 00:07:29.624 --rc genhtml_legend=1 00:07:29.624 --rc geninfo_all_blocks=1 00:07:29.624 --rc geninfo_unexecuted_blocks=1 00:07:29.624 00:07:29.624 ' 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.624 --rc genhtml_branch_coverage=1 00:07:29.624 --rc genhtml_function_coverage=1 00:07:29.624 --rc genhtml_legend=1 00:07:29.624 --rc geninfo_all_blocks=1 00:07:29.624 --rc geninfo_unexecuted_blocks=1 00:07:29.624 00:07:29.624 ' 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.624 --rc genhtml_branch_coverage=1 00:07:29.624 --rc genhtml_function_coverage=1 00:07:29.624 --rc genhtml_legend=1 00:07:29.624 --rc geninfo_all_blocks=1 00:07:29.624 --rc geninfo_unexecuted_blocks=1 00:07:29.624 00:07:29.624 ' 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
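basic_rw.sh has just declared method_bdev_nvme_attach_controller_0 with name Nvme0, traddr 0000:00:10.0 and trtype pcie; gen_conf later turns that array into the --json config that spdk_dd receives, and the resulting document is dumped near the end of this section. A hand-written equivalent, shown purely to illustrate the shape (the /tmp output path is hypothetical, not from the log), would be:

# Hedged sketch: the per-test bdev config as set up in basic_rw.sh, and the JSON
# shape spdk_dd ultimately sees on --json. Names and the PCI address are the ones
# from the trace; the file path below is only illustrative.
declare -A method_bdev_nvme_attach_controller_0=(
    ['name']='Nvme0'
    ['traddr']='0000:00:10.0'
    ['trtype']='pcie'
)
cat <<'JSON' > /tmp/dd_conf.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
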
00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:29.624 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:29.625 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:29.625 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.626 ************************************ 00:07:29.626 START TEST dd_bs_lt_native_bs 00:07:29.626 ************************************ 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.626 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.885 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.885 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.885 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.885 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.885 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.885 10:01:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:29.885 { 00:07:29.885 "subsystems": [ 00:07:29.885 { 00:07:29.885 "subsystem": "bdev", 00:07:29.885 "config": [ 00:07:29.885 { 00:07:29.885 "params": { 00:07:29.885 "trtype": "pcie", 00:07:29.885 "traddr": "0000:00:10.0", 00:07:29.885 "name": "Nvme0" 00:07:29.885 }, 00:07:29.885 "method": "bdev_nvme_attach_controller" 00:07:29.885 }, 00:07:29.885 { 00:07:29.885 "method": "bdev_wait_for_examine" 00:07:29.885 } 00:07:29.885 ] 00:07:29.885 } 00:07:29.885 ] 00:07:29.885 } 00:07:29.885 [2024-11-19 10:01:43.566861] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:29.885 [2024-11-19 10:01:43.566993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59560 ] 00:07:29.885 [2024-11-19 10:01:43.719544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.144 [2024-11-19 10:01:43.781813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.144 [2024-11-19 10:01:43.840665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.144 [2024-11-19 10:01:43.952165] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:30.144 [2024-11-19 10:01:43.952250] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.402 [2024-11-19 10:01:44.076926] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:30.402 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:30.402 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.402 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:30.402 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:30.402 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:30.402 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.402 00:07:30.402 real 0m0.625s 00:07:30.402 user 0m0.408s 00:07:30.402 sys 0m0.173s 00:07:30.402 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.402 10:01:44 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:30.402 ************************************ 00:07:30.402 END TEST dd_bs_lt_native_bs 00:07:30.402 ************************************ 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.403 ************************************ 00:07:30.403 START TEST dd_rw 00:07:30.403 ************************************ 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:30.403 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.970 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:30.970 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:30.970 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.970 10:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.970 { 00:07:30.970 "subsystems": [ 00:07:30.970 { 00:07:30.970 "subsystem": "bdev", 00:07:30.970 "config": [ 00:07:30.970 { 00:07:30.970 "params": { 00:07:30.970 "trtype": "pcie", 00:07:30.970 "traddr": "0000:00:10.0", 00:07:30.971 "name": "Nvme0" 00:07:30.971 }, 00:07:30.971 "method": "bdev_nvme_attach_controller" 00:07:30.971 }, 00:07:30.971 { 00:07:30.971 "method": "bdev_wait_for_examine" 00:07:30.971 } 00:07:30.971 ] 00:07:30.971 } 
00:07:30.971 ] 00:07:30.971 } 00:07:30.971 [2024-11-19 10:01:44.838501] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:30.971 [2024-11-19 10:01:44.838612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59596 ] 00:07:31.230 [2024-11-19 10:01:44.988077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.230 [2024-11-19 10:01:45.036109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.230 [2024-11-19 10:01:45.092008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.489  [2024-11-19T10:01:45.378Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:31.489 00:07:31.747 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:31.747 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:31.747 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:31.747 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.747 { 00:07:31.747 "subsystems": [ 00:07:31.747 { 00:07:31.747 "subsystem": "bdev", 00:07:31.747 "config": [ 00:07:31.747 { 00:07:31.747 "params": { 00:07:31.747 "trtype": "pcie", 00:07:31.747 "traddr": "0000:00:10.0", 00:07:31.747 "name": "Nvme0" 00:07:31.747 }, 00:07:31.747 "method": "bdev_nvme_attach_controller" 00:07:31.747 }, 00:07:31.747 { 00:07:31.747 "method": "bdev_wait_for_examine" 00:07:31.747 } 00:07:31.747 ] 00:07:31.747 } 00:07:31.747 ] 00:07:31.747 } 00:07:31.747 [2024-11-19 10:01:45.437602] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
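Note on the dd_bs_lt_native_bs run traced above: the harness pulls the current LBA format's data size out of the identify dump (the "LBA Format #04: Data Size" regex), treats it as the native block size (4096 here), and then requires spdk_dd to reject any --bs smaller than that. A minimal bash sketch of that check follows; get_identify_dump and SPDK_BIN are stand-ins, and the input is simplified to /dev/zero where the real run feeds /dev/fd/62.

# Not the verbatim test code, just the shape of the dd_bs_lt_native_bs check traced above.
id_output=$(get_identify_dump)                                 # hypothetical stand-in for however the identify dump is captured
pattern='LBA Format #04: Data Size: *([0-9]+)'
[[ $id_output =~ $pattern ]] && native_bs=${BASH_REMATCH[1]}   # 4096 for this controller
# A --bs below the native block size must make spdk_dd fail with the error quoted in the log:
#   "--bs value cannot be less than input (1) neither output (4096) native block size"
if "$SPDK_BIN/spdk_dd" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json <(gen_conf); then
    echo "spdk_dd accepted bs=2048 below native bs ${native_bs}" >&2
    exit 1
fi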
00:07:31.747 [2024-11-19 10:01:45.437702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59610 ] 00:07:31.747 [2024-11-19 10:01:45.581534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.747 [2024-11-19 10:01:45.627770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.006 [2024-11-19 10:01:45.686873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.006  [2024-11-19T10:01:46.153Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:32.264 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.264 10:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.264 { 00:07:32.264 "subsystems": [ 00:07:32.264 { 00:07:32.264 "subsystem": "bdev", 00:07:32.264 "config": [ 00:07:32.264 { 00:07:32.264 "params": { 00:07:32.264 "trtype": "pcie", 00:07:32.264 "traddr": "0000:00:10.0", 00:07:32.264 "name": "Nvme0" 00:07:32.264 }, 00:07:32.264 "method": "bdev_nvme_attach_controller" 00:07:32.264 }, 00:07:32.264 { 00:07:32.264 "method": "bdev_wait_for_examine" 00:07:32.264 } 00:07:32.264 ] 00:07:32.264 } 00:07:32.265 ] 00:07:32.265 } 00:07:32.265 [2024-11-19 10:01:46.046793] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
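Every spdk_dd invocation in this test receives its bdev configuration as JSON over an anonymous pipe (--json /dev/fd/62 or /dev/fd/61), which is why the same subsystem block keeps reappearing in the trace. Written out as a standalone sketch with the same content as the blocks above (only the file form and the unqualified spdk_dd path are assumptions):

cat > nvme0_conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:10.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}
EOF
# Any of the copies above could then be driven from a regular file instead of a pipe, e.g.:
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json nvme0_conf.json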
00:07:32.265 [2024-11-19 10:01:46.046910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59625 ] 00:07:32.524 [2024-11-19 10:01:46.193108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.524 [2024-11-19 10:01:46.241949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.524 [2024-11-19 10:01:46.296227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.524  [2024-11-19T10:01:46.672Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:32.783 00:07:32.783 10:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:32.783 10:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:32.784 10:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:32.784 10:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:32.784 10:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:32.784 10:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:32.784 10:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.352 10:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:33.352 10:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:33.352 10:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:33.352 10:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.352 { 00:07:33.352 "subsystems": [ 00:07:33.352 { 00:07:33.352 "subsystem": "bdev", 00:07:33.352 "config": [ 00:07:33.352 { 00:07:33.352 "params": { 00:07:33.352 "trtype": "pcie", 00:07:33.352 "traddr": "0000:00:10.0", 00:07:33.352 "name": "Nvme0" 00:07:33.352 }, 00:07:33.352 "method": "bdev_nvme_attach_controller" 00:07:33.352 }, 00:07:33.352 { 00:07:33.352 "method": "bdev_wait_for_examine" 00:07:33.352 } 00:07:33.352 ] 00:07:33.352 } 00:07:33.352 ] 00:07:33.352 } 00:07:33.352 [2024-11-19 10:01:47.215377] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
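Each block-size/queue-depth combination in dd_rw follows the same four-step pattern that repeats through the rest of this trace: write a generated dump file to the Nvme0n1 bdev, read the same region back into a second dump file, byte-compare the two, then blank the region before the next pass. A condensed sketch, with file names shortened, $CONF standing in for the gen_conf pipe, and the gen_bytes redirection shown only for illustration:

# One dd_rw iteration, condensed from the basic_rw.sh trace above.
bs=4096; qd=1; count=15                  # bs * count = 61440 bytes, the "60/60 [kB]" transfers above
gen_bytes $((bs * count)) > dd.dump0     # gen_bytes is the helper called in the trace; the harness manages the dump files itself
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"                     # write the pattern out
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"    # read the same region back
diff -q dd.dump0 dd.dump1                                                                   # both dumps must be identical
spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"                   # clear_nvme: overwrite with 1 MiB of zeroes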
00:07:33.352 [2024-11-19 10:01:47.215478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59650 ] 00:07:33.611 [2024-11-19 10:01:47.359554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.611 [2024-11-19 10:01:47.399780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.611 [2024-11-19 10:01:47.456014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.869  [2024-11-19T10:01:47.758Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:33.869 00:07:34.128 10:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:34.128 10:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:34.128 10:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.128 10:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.128 [2024-11-19 10:01:47.804195] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:34.128 [2024-11-19 10:01:47.804285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59662 ] 00:07:34.128 { 00:07:34.128 "subsystems": [ 00:07:34.128 { 00:07:34.128 "subsystem": "bdev", 00:07:34.128 "config": [ 00:07:34.128 { 00:07:34.128 "params": { 00:07:34.128 "trtype": "pcie", 00:07:34.128 "traddr": "0000:00:10.0", 00:07:34.128 "name": "Nvme0" 00:07:34.128 }, 00:07:34.128 "method": "bdev_nvme_attach_controller" 00:07:34.128 }, 00:07:34.128 { 00:07:34.128 "method": "bdev_wait_for_examine" 00:07:34.128 } 00:07:34.128 ] 00:07:34.128 } 00:07:34.128 ] 00:07:34.128 } 00:07:34.128 [2024-11-19 10:01:47.942714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.128 [2024-11-19 10:01:47.991098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.388 [2024-11-19 10:01:48.046101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.388  [2024-11-19T10:01:48.536Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:34.647 00:07:34.647 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.648 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.648 [2024-11-19 10:01:48.412637] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:34.648 [2024-11-19 10:01:48.412745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59679 ] 00:07:34.648 { 00:07:34.648 "subsystems": [ 00:07:34.648 { 00:07:34.648 "subsystem": "bdev", 00:07:34.648 "config": [ 00:07:34.648 { 00:07:34.648 "params": { 00:07:34.648 "trtype": "pcie", 00:07:34.648 "traddr": "0000:00:10.0", 00:07:34.648 "name": "Nvme0" 00:07:34.648 }, 00:07:34.648 "method": "bdev_nvme_attach_controller" 00:07:34.648 }, 00:07:34.648 { 00:07:34.648 "method": "bdev_wait_for_examine" 00:07:34.648 } 00:07:34.648 ] 00:07:34.648 } 00:07:34.648 ] 00:07:34.648 } 00:07:34.907 [2024-11-19 10:01:48.557108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.907 [2024-11-19 10:01:48.608768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.907 [2024-11-19 10:01:48.663280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.907  [2024-11-19T10:01:49.055Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:35.167 00:07:35.167 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:35.167 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:35.167 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:35.167 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:35.167 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:35.167 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:35.167 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:35.167 10:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.735 10:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:35.735 10:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:35.735 10:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:35.735 10:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.735 [2024-11-19 10:01:49.586197] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
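The block sizes and queue depths being swept were set up once at the top of dd_rw: qds=(1 64) and three block sizes derived by left-shifting the 4096-byte native block size. That is why the trace now moves from 4096-byte to 8192-byte and then 16384-byte I/O at queue depths 1 and 64, with the per-pass payload shrinking from 60 kB to 56 kB to 48 kB. A sketch of the sweep, restructured for readability rather than copied from basic_rw.sh (run_one_pass is a hypothetical wrapper for the four-step cycle sketched earlier):

# Parameter sweep behind the repeated invocations in this trace.
native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do
    bss+=($((native_bs << s)))                        # 4096, 8192, 16384
done
declare -A counts=([4096]=15 [8192]=7 [16384]=3)      # 61440, 57344 and 49152 bytes per pass
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        run_one_pass "$bs" "$qd" "${counts[$bs]}"     # hypothetical wrapper around write / read back / diff / clear
    done
done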
00:07:35.735 { 00:07:35.735 "subsystems": [ 00:07:35.735 { 00:07:35.735 "subsystem": "bdev", 00:07:35.735 "config": [ 00:07:35.735 { 00:07:35.735 "params": { 00:07:35.735 "trtype": "pcie", 00:07:35.735 "traddr": "0000:00:10.0", 00:07:35.735 "name": "Nvme0" 00:07:35.735 }, 00:07:35.735 "method": "bdev_nvme_attach_controller" 00:07:35.735 }, 00:07:35.735 { 00:07:35.735 "method": "bdev_wait_for_examine" 00:07:35.735 } 00:07:35.735 ] 00:07:35.735 } 00:07:35.735 ] 00:07:35.735 } 00:07:35.735 [2024-11-19 10:01:49.586350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:07:35.995 [2024-11-19 10:01:49.733294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.995 [2024-11-19 10:01:49.781291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.995 [2024-11-19 10:01:49.841493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.255  [2024-11-19T10:01:50.463Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:36.574 00:07:36.574 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:36.574 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:36.574 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.574 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.574 [2024-11-19 10:01:50.236243] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:36.574 [2024-11-19 10:01:50.236373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59711 ] 00:07:36.574 { 00:07:36.574 "subsystems": [ 00:07:36.574 { 00:07:36.574 "subsystem": "bdev", 00:07:36.574 "config": [ 00:07:36.574 { 00:07:36.574 "params": { 00:07:36.574 "trtype": "pcie", 00:07:36.574 "traddr": "0000:00:10.0", 00:07:36.574 "name": "Nvme0" 00:07:36.574 }, 00:07:36.574 "method": "bdev_nvme_attach_controller" 00:07:36.574 }, 00:07:36.574 { 00:07:36.574 "method": "bdev_wait_for_examine" 00:07:36.574 } 00:07:36.574 ] 00:07:36.574 } 00:07:36.574 ] 00:07:36.574 } 00:07:36.574 [2024-11-19 10:01:50.385509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.574 [2024-11-19 10:01:50.431552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.847 [2024-11-19 10:01:50.488519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.847  [2024-11-19T10:01:50.995Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:37.106 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.106 10:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.106 { 00:07:37.106 "subsystems": [ 00:07:37.106 { 00:07:37.106 "subsystem": "bdev", 00:07:37.106 "config": [ 00:07:37.106 { 00:07:37.106 "params": { 00:07:37.106 "trtype": "pcie", 00:07:37.106 "traddr": "0000:00:10.0", 00:07:37.106 "name": "Nvme0" 00:07:37.106 }, 00:07:37.106 "method": "bdev_nvme_attach_controller" 00:07:37.106 }, 00:07:37.106 { 00:07:37.106 "method": "bdev_wait_for_examine" 00:07:37.106 } 00:07:37.106 ] 00:07:37.106 } 00:07:37.106 ] 00:07:37.106 } 00:07:37.106 [2024-11-19 10:01:50.856889] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:37.106 [2024-11-19 10:01:50.857018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59727 ] 00:07:37.366 [2024-11-19 10:01:51.001810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.366 [2024-11-19 10:01:51.050224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.366 [2024-11-19 10:01:51.105566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.366  [2024-11-19T10:01:51.514Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:37.625 00:07:37.625 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:37.625 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:37.625 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:37.625 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:37.625 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:37.625 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:37.625 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.192 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:38.192 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:38.192 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.192 10:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.192 [2024-11-19 10:01:51.918276] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:38.192 [2024-11-19 10:01:51.918404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59746 ] 00:07:38.192 { 00:07:38.192 "subsystems": [ 00:07:38.192 { 00:07:38.192 "subsystem": "bdev", 00:07:38.192 "config": [ 00:07:38.192 { 00:07:38.192 "params": { 00:07:38.192 "trtype": "pcie", 00:07:38.192 "traddr": "0000:00:10.0", 00:07:38.192 "name": "Nvme0" 00:07:38.192 }, 00:07:38.192 "method": "bdev_nvme_attach_controller" 00:07:38.192 }, 00:07:38.192 { 00:07:38.192 "method": "bdev_wait_for_examine" 00:07:38.192 } 00:07:38.192 ] 00:07:38.192 } 00:07:38.192 ] 00:07:38.192 } 00:07:38.192 [2024-11-19 10:01:52.065675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.451 [2024-11-19 10:01:52.115292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.451 [2024-11-19 10:01:52.168826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.451  [2024-11-19T10:01:52.600Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:38.711 00:07:38.711 10:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:38.711 10:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:38.711 10:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.711 10:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.711 [2024-11-19 10:01:52.527330] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:38.711 [2024-11-19 10:01:52.527437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:07:38.711 { 00:07:38.711 "subsystems": [ 00:07:38.711 { 00:07:38.711 "subsystem": "bdev", 00:07:38.711 "config": [ 00:07:38.711 { 00:07:38.711 "params": { 00:07:38.711 "trtype": "pcie", 00:07:38.711 "traddr": "0000:00:10.0", 00:07:38.711 "name": "Nvme0" 00:07:38.711 }, 00:07:38.711 "method": "bdev_nvme_attach_controller" 00:07:38.711 }, 00:07:38.711 { 00:07:38.711 "method": "bdev_wait_for_examine" 00:07:38.711 } 00:07:38.711 ] 00:07:38.711 } 00:07:38.711 ] 00:07:38.711 } 00:07:38.970 [2024-11-19 10:01:52.675382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.970 [2024-11-19 10:01:52.721400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.970 [2024-11-19 10:01:52.775658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.229  [2024-11-19T10:01:53.118Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:39.229 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.229 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.488 { 00:07:39.488 "subsystems": [ 00:07:39.488 { 00:07:39.488 "subsystem": "bdev", 00:07:39.488 "config": [ 00:07:39.488 { 00:07:39.488 "params": { 00:07:39.488 "trtype": "pcie", 00:07:39.488 "traddr": "0000:00:10.0", 00:07:39.488 "name": "Nvme0" 00:07:39.488 }, 00:07:39.488 "method": "bdev_nvme_attach_controller" 00:07:39.488 }, 00:07:39.488 { 00:07:39.488 "method": "bdev_wait_for_examine" 00:07:39.488 } 00:07:39.488 ] 00:07:39.488 } 00:07:39.488 ] 00:07:39.488 } 00:07:39.488 [2024-11-19 10:01:53.146272] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:39.488 [2024-11-19 10:01:53.146375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59780 ] 00:07:39.488 [2024-11-19 10:01:53.293854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.488 [2024-11-19 10:01:53.354022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.747 [2024-11-19 10:01:53.410127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.747  [2024-11-19T10:01:53.895Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:40.006 00:07:40.006 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:40.006 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:40.006 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:40.006 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:40.006 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:40.006 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:40.006 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:40.006 10:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.265 10:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:40.265 10:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:40.265 10:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.265 10:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.524 [2024-11-19 10:01:54.162634] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:40.524 [2024-11-19 10:01:54.162735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59799 ] 00:07:40.524 { 00:07:40.524 "subsystems": [ 00:07:40.524 { 00:07:40.524 "subsystem": "bdev", 00:07:40.524 "config": [ 00:07:40.524 { 00:07:40.524 "params": { 00:07:40.524 "trtype": "pcie", 00:07:40.524 "traddr": "0000:00:10.0", 00:07:40.524 "name": "Nvme0" 00:07:40.524 }, 00:07:40.524 "method": "bdev_nvme_attach_controller" 00:07:40.524 }, 00:07:40.525 { 00:07:40.525 "method": "bdev_wait_for_examine" 00:07:40.525 } 00:07:40.525 ] 00:07:40.525 } 00:07:40.525 ] 00:07:40.525 } 00:07:40.525 [2024-11-19 10:01:54.309903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.525 [2024-11-19 10:01:54.356152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.784 [2024-11-19 10:01:54.415219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.784  [2024-11-19T10:01:54.933Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:41.044 00:07:41.044 10:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:41.044 10:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:41.044 10:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.044 10:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.044 { 00:07:41.044 "subsystems": [ 00:07:41.044 { 00:07:41.044 "subsystem": "bdev", 00:07:41.044 "config": [ 00:07:41.044 { 00:07:41.044 "params": { 00:07:41.044 "trtype": "pcie", 00:07:41.044 "traddr": "0000:00:10.0", 00:07:41.044 "name": "Nvme0" 00:07:41.044 }, 00:07:41.044 "method": "bdev_nvme_attach_controller" 00:07:41.044 }, 00:07:41.044 { 00:07:41.044 "method": "bdev_wait_for_examine" 00:07:41.044 } 00:07:41.044 ] 00:07:41.044 } 00:07:41.044 ] 00:07:41.044 } 00:07:41.044 [2024-11-19 10:01:54.781078] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:41.044 [2024-11-19 10:01:54.781221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:07:41.044 [2024-11-19 10:01:54.925320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.303 [2024-11-19 10:01:54.973393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.303 [2024-11-19 10:01:55.027941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.303  [2024-11-19T10:01:55.450Z] Copying: 48/48 [kB] (average 23 MBps) 00:07:41.561 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.561 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.561 { 00:07:41.561 "subsystems": [ 00:07:41.561 { 00:07:41.561 "subsystem": "bdev", 00:07:41.561 "config": [ 00:07:41.561 { 00:07:41.561 "params": { 00:07:41.561 "trtype": "pcie", 00:07:41.561 "traddr": "0000:00:10.0", 00:07:41.561 "name": "Nvme0" 00:07:41.561 }, 00:07:41.561 "method": "bdev_nvme_attach_controller" 00:07:41.561 }, 00:07:41.562 { 00:07:41.562 "method": "bdev_wait_for_examine" 00:07:41.562 } 00:07:41.562 ] 00:07:41.562 } 00:07:41.562 ] 00:07:41.562 } 00:07:41.562 [2024-11-19 10:01:55.395217] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:41.562 [2024-11-19 10:01:55.395779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59834 ] 00:07:41.819 [2024-11-19 10:01:55.540906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.819 [2024-11-19 10:01:55.580564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.819 [2024-11-19 10:01:55.637527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.076  [2024-11-19T10:01:55.965Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:42.076 00:07:42.076 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:42.076 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:42.076 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:42.076 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:42.076 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:42.076 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:42.076 10:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.672 10:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:42.672 10:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:42.672 10:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.672 10:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.672 [2024-11-19 10:01:56.438899] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:42.672 [2024-11-19 10:01:56.439445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:07:42.672 { 00:07:42.672 "subsystems": [ 00:07:42.672 { 00:07:42.672 "subsystem": "bdev", 00:07:42.672 "config": [ 00:07:42.672 { 00:07:42.672 "params": { 00:07:42.672 "trtype": "pcie", 00:07:42.672 "traddr": "0000:00:10.0", 00:07:42.672 "name": "Nvme0" 00:07:42.672 }, 00:07:42.672 "method": "bdev_nvme_attach_controller" 00:07:42.672 }, 00:07:42.672 { 00:07:42.672 "method": "bdev_wait_for_examine" 00:07:42.672 } 00:07:42.672 ] 00:07:42.672 } 00:07:42.672 ] 00:07:42.672 } 00:07:42.930 [2024-11-19 10:01:56.587244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.930 [2024-11-19 10:01:56.645876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.930 [2024-11-19 10:01:56.703232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.930  [2024-11-19T10:01:57.077Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:43.188 00:07:43.188 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.188 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:43.188 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.188 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.188 { 00:07:43.188 "subsystems": [ 00:07:43.188 { 00:07:43.188 "subsystem": "bdev", 00:07:43.188 "config": [ 00:07:43.188 { 00:07:43.188 "params": { 00:07:43.188 "trtype": "pcie", 00:07:43.188 "traddr": "0000:00:10.0", 00:07:43.188 "name": "Nvme0" 00:07:43.188 }, 00:07:43.188 "method": "bdev_nvme_attach_controller" 00:07:43.188 }, 00:07:43.188 { 00:07:43.188 "method": "bdev_wait_for_examine" 00:07:43.188 } 00:07:43.188 ] 00:07:43.188 } 00:07:43.188 ] 00:07:43.188 } 00:07:43.188 [2024-11-19 10:01:57.075204] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:43.188 [2024-11-19 10:01:57.075371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ] 00:07:43.447 [2024-11-19 10:01:57.221847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.447 [2024-11-19 10:01:57.275716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.447 [2024-11-19 10:01:57.329868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.706  [2024-11-19T10:01:57.854Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:43.965 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.965 10:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.965 { 00:07:43.965 "subsystems": [ 00:07:43.965 { 00:07:43.965 "subsystem": "bdev", 00:07:43.965 "config": [ 00:07:43.965 { 00:07:43.965 "params": { 00:07:43.965 "trtype": "pcie", 00:07:43.965 "traddr": "0000:00:10.0", 00:07:43.965 "name": "Nvme0" 00:07:43.965 }, 00:07:43.965 "method": "bdev_nvme_attach_controller" 00:07:43.965 }, 00:07:43.965 { 00:07:43.965 "method": "bdev_wait_for_examine" 00:07:43.965 } 00:07:43.965 ] 00:07:43.965 } 00:07:43.965 ] 00:07:43.965 } 00:07:43.965 [2024-11-19 10:01:57.703754] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:43.965 [2024-11-19 10:01:57.703974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59882 ] 00:07:44.224 [2024-11-19 10:01:57.855405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.224 [2024-11-19 10:01:57.915392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.224 [2024-11-19 10:01:57.972454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.224  [2024-11-19T10:01:58.372Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.483 00:07:44.483 00:07:44.483 real 0m14.116s 00:07:44.483 user 0m10.225s 00:07:44.483 sys 0m5.465s 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.483 ************************************ 00:07:44.483 END TEST dd_rw 00:07:44.483 ************************************ 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.483 ************************************ 00:07:44.483 START TEST dd_rw_offset 00:07:44.483 ************************************ 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:44.483 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:44.740 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:44.740 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=kksffms7zjkszp11gmjqnikqxm5lro129v86lxz6lkfe31e0sq83iro7r5wm8xf4ekavoh08ifudv8od8tk9btdrbsv82yegxaxwhj9mrll9vwkn1b3p4ke3gxk39vhnbiak6vk0gx9du6g7n2heh3slr98pn3ks87e1xkznflzy32k3g5iy7bglt78llwsln7lfzxaqjxjjvi0xb25r3s55khlgf1m720p3m1rq7ezec8bvkg2jgep1zth63jk5riw8o7k6xz2f65rkie4zddlrr0grv3khc8269q3xdw1r39r0hl3poscut2q67by1i7p1sc1w9t3mq8et1p8c4ts23r4fqph68j32r21h0v3001q7tb2e6158xe0ogit8mbr0287sdu58dck5qjwua5mwiwgovftoqcya06hu79sfu0ke7pnxleazb5uh62invzdkkg04icxmozjo6hm8zrkwk1unbkvgb9m7o7ws0ew42ie1je93i8lwsfxlt6fma1u8lsaug6cfyhnidysjstbtnmni12sd0t5cqk14dx7jhfx45lf2z0fagiamqfwgyvaf563km4j94cljjut7rrc50g8j49bj729f5yjn8af9dsjamhx3ot22zvid261z4b9xjo8bhbt4qn4qlyq6j1ot7m4cld48b3y4zhrbezeuo1tu1m31f2hffsj8dlpejcapkd76cbdzkl0xjh1q4ahjqae8q8e8kzthivri5hc8wx29rx08rc1ol0pu4zkaq5auwunpk6bdj9369gpt8u5ldra3qm316kdxho5p75m89x5cdoybtt56hcdml440znest20jql6adva4zw2qa77foen0gsbpx3hmo4994l5co7bx3l9kwc4rspvs0n23ht8hri8mzhos5ywfo8qjwoa80qmckrca0x9sguk3ptwhxginxq2torr635cawdlk0veet5nfvfzor5dkmby06u5bay2tp31vaxxbncmqjmwm0lugexo0i7sa585excpeg7qxp8frgwgw76t1j7ixkljz7rejqmk76cffybqparvzrwr826txoa705iqqvf2mkj6e0jfnx6fwwkynmcisoxxiilgcgc9dxzx8l1yr3qhba8dvcqju34sp6sr3v8hcwxij9mtd8axbf8vlwpjt62jyqw8x0pz6htirr4u27buftv07wssam766ql4zsqaqid8hnubv08rz51azcy9pkl6w14hd74pjupmka0qz0x4gzozmdr829ldzfhbbkq9kubplxpb4q4q7p2uhgaqdm5zvd5qrmwv9m6z8hxcqpq51016xkawow30fi98ghp6pxo69toppk52cas3tle3qhnfloe91u5yhystzw3anjjhqq7goaizsmcro54qxmxxzj21gm5ldtuf78y5yccfo6ygpmhi9ja6r7ak2lrjl1hgy1yb3vmlfhl6z2yg2c7fltmshuz47lhcc5y2y860hfwiqotyvkte3vg7n0f4jxxjbg98cg9fzq3bexuw0k0c0022eyzq28bel4sezq3zsk6adcfco087a6b1u6gia6zly7armdlm9pammw8bded2kraktp1ynz99or9cqtic687r2nymm0t3ponti9kdqgg8nm258bh5ynjvef9ri0yioefzysej418wheip5ii5swq7j93pnruer51k6kl58p621kgh7jvgn86n8d6jb62tkt2q4d3og5yf18e264jlqvs6o0v4vytqlxiwf59gthuscn944dmezdojw4ij087o86ep3y270dhmp47ocsfhx49ob8atgoisjld04jlllqzj0royt4cpkuox2pz5ijqecj8ezsvbikm8uyt9syyop5xj822x36e7tasuoyt487ep4kjobg7bie3chp9ouddjt44estskmpw42ggjybfqb3idzwyq9ines9q6mgov19q6tuf9ujn9llxvqaeiixn1lbqwzjyhzylenkmxs519vswtgdis2p9j7a9s0pdoeehkcz3hbir54iazdabjiiin6b8gjdona7v830t08hiyynk5zinyseo6s6rab4i6ne0taku9oevgf52vete6a66ofh849c2afylh72sk1oc97jdfyoskk6cizqbq3vik3b70qe5qvasbfk14w1ajv5h8n157fmsngb0e07nvfqay71h1nszs3roio4ibj40v9iiubhlcvi5d3st8t28rq4sllurk57fwnu1yi0wm06g0eotw8uja9ms4vi5enbsgnok6iet2gilhgoelyfp7wywewsgu8kn9s5o5e9lc9a80hg50qd8kgvc2t7sb3hkswj0udmjuvlwlhjjfkcm8y9xva2ohx3deyq0yz11yveiqll5liqrtm2j9070nkgf8st9yn01s98kkgnvgr84lnxpso0t45aegwmbcyz15bsr6yoe9cwjewr091mqgpa5vlknjnnko9vdhrv1q2pmoqlbcu596rzi8bh334ye9rmugqzr6rplcg7q2q6q1ngfd24bysri2hg59qq60merucy6fkujq6vjwxl0ubjjlp25q6d4h1zizh9qt138psv85ov9ooadp8bkfy75e029rela7zm99pmf28x3f1x6pcm930hv1qcxgr1w4iqr40xsjx1vu3jyxogl2oe8miwrufazymwtyg2ye3urwp6iizonvbeupxgi8lkevr9ron39i6sbw34zruu3wqarwpj9r5zzwdd2el3astxdj4gevzc1opihpd0wlr7anu157hnijrvry2w6c7l8unpna7roqwzvfi8tyear9h6rl3nk4idwb5437dv6puzwg7bhc7epzrfwr6ucrlp4bmgw5z9psh2uucm84uq462qkce7mnbkdd1893yb7r1bpk0k7bvtjufplc0u5r16inefx8q0jt2cgm44rr4j8va88l9b96eevk7obe6kx8h2zcatbhmm1f1dzanyz924u83sx2sc9gqth90vwkpovg4gk59to3g97ocvh6bys4pzx16hrk7ok2rvysakj5a4td3fec8em72g1qgr8841hcfoj1p8ecwrfnblfh4hcuh8iegdfvhzduwa4ls7h4sc76ax47f7mthvim5c9ido06b4npci14tvaihrpl26knrw6ubc5dyf2yd4s0fvbis446j1b6wp969y9q4pypfrht9npxzclvxnksme4tlmywkfm2e0pop6dzj62iadh7np3cpl1zm5024inlsb8c9drjoklsvk0rikp8uly097v99nrtbiwuqhkae0gl7pmt4ahg01az1pnq52tovd3pftpwxfnrlg8o2h95vvfa1l04ed80j9urrrw8kzhbyt3tc0fbet9vdfrktp57m28wpl1cnj3glle7bytg7eozxgflyym4ynt5wr2kvktye5v04lnsm3ievkvdjee4ap5kb8a7ib5u1zc0q92ytmvoru66kbd62le4sduuq116rjl1ll2vegeqqe4r1ulgintmz8v85u96q3zqz2v3tw8rxrojiaz
vtae7odwyuyqbbbr5pe4fefan3fypg1lkj35byxg4q1eubi3k6uu8v9454811wgd4net1vwunjp0uujtzy71s8ornd0s9nbz2y5z3ml0j4taxoh3mwlxof2giqd4zxzkee39w63voedgs0u28eisru63297u6e6q5wx14jit83zondbrap5ecylz6e51ygwrtx6w1zskprr1u091zpp0e2e4ors9fozwhlg8hnjkstgs3p9vwphwc7kmujrqsl5o2iuagxazvivv6hrab9exh50987volwgmvi7120fbsvm3yjcc9g82w089e3pc6t3gjcx5x334aquzciey7ugvbqcszcgphndza44sy5l9tie59onklm3zrztp6lygzsx13dr9rpkp4rsjehj3cxm5pqma5yx1yko4m2vhv5u29sms7y7ovt12zyiqgxinp06ajz2oylj67gs8fifyjqhei03h5kbcfdda18kwwl86se64ke4qg7v5ic6vl0y2ad1ke96y3j9jhxrwacumeotfshvp1gnf59h6pn 00:07:44.740 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:44.740 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:44.740 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:44.740 10:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:44.740 { 00:07:44.740 "subsystems": [ 00:07:44.740 { 00:07:44.740 "subsystem": "bdev", 00:07:44.740 "config": [ 00:07:44.740 { 00:07:44.740 "params": { 00:07:44.740 "trtype": "pcie", 00:07:44.740 "traddr": "0000:00:10.0", 00:07:44.740 "name": "Nvme0" 00:07:44.740 }, 00:07:44.740 "method": "bdev_nvme_attach_controller" 00:07:44.740 }, 00:07:44.740 { 00:07:44.740 "method": "bdev_wait_for_examine" 00:07:44.740 } 00:07:44.740 ] 00:07:44.740 } 00:07:44.740 ] 00:07:44.740 } 00:07:44.740 [2024-11-19 10:01:58.461568] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:44.740 [2024-11-19 10:01:58.461663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59918 ] 00:07:44.740 [2024-11-19 10:01:58.607367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.000 [2024-11-19 10:01:58.669222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.000 [2024-11-19 10:01:58.724625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.000  [2024-11-19T10:01:59.148Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:45.259 00:07:45.259 10:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:45.259 10:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:45.259 10:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:45.259 10:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:45.259 { 00:07:45.259 "subsystems": [ 00:07:45.259 { 00:07:45.259 "subsystem": "bdev", 00:07:45.259 "config": [ 00:07:45.259 { 00:07:45.259 "params": { 00:07:45.259 "trtype": "pcie", 00:07:45.259 "traddr": "0000:00:10.0", 00:07:45.259 "name": "Nvme0" 00:07:45.259 }, 00:07:45.259 "method": "bdev_nvme_attach_controller" 00:07:45.259 }, 00:07:45.259 { 00:07:45.259 "method": "bdev_wait_for_examine" 00:07:45.259 } 00:07:45.259 ] 00:07:45.259 } 00:07:45.259 ] 00:07:45.259 } 00:07:45.259 [2024-11-19 10:01:59.086409] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
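The dd_rw_offset test running here works on a single native block: it generates 4096 bytes of random printable data (the long string in the trace), writes it one block into the bdev with --seek=1, reads that block back with --skip=1 --count=1, and compares the first 4096 bytes of the read-back file against the generated string. A compact sketch of the same flow; the dump file handling and the reuse of $CONF are illustrative, not the exact basic_rw.sh code.

# dd_rw_offset, condensed: write one native block at an offset of one block, read it back, compare.
data=$(gen_bytes 4096)                                    # the 4096-character random string echoed above
printf '%s' "$data" > dd.dump0                            # illustrative; the harness manages the dump file itself
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json "$CONF"               # write into block #1 of the bdev
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$CONF"     # read block #1 back out
read -rn4096 data_check < dd.dump1
[[ $data_check == "$data" ]]                              # the comparison visible at the end of the trace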
00:07:45.259 [2024-11-19 10:01:59.086537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59926 ] 00:07:45.517 [2024-11-19 10:01:59.232933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.517 [2024-11-19 10:01:59.296182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.517 [2024-11-19 10:01:59.352317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.775  [2024-11-19T10:01:59.664Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:45.775 00:07:45.775 10:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ kksffms7zjkszp11gmjqnikqxm5lro129v86lxz6lkfe31e0sq83iro7r5wm8xf4ekavoh08ifudv8od8tk9btdrbsv82yegxaxwhj9mrll9vwkn1b3p4ke3gxk39vhnbiak6vk0gx9du6g7n2heh3slr98pn3ks87e1xkznflzy32k3g5iy7bglt78llwsln7lfzxaqjxjjvi0xb25r3s55khlgf1m720p3m1rq7ezec8bvkg2jgep1zth63jk5riw8o7k6xz2f65rkie4zddlrr0grv3khc8269q3xdw1r39r0hl3poscut2q67by1i7p1sc1w9t3mq8et1p8c4ts23r4fqph68j32r21h0v3001q7tb2e6158xe0ogit8mbr0287sdu58dck5qjwua5mwiwgovftoqcya06hu79sfu0ke7pnxleazb5uh62invzdkkg04icxmozjo6hm8zrkwk1unbkvgb9m7o7ws0ew42ie1je93i8lwsfxlt6fma1u8lsaug6cfyhnidysjstbtnmni12sd0t5cqk14dx7jhfx45lf2z0fagiamqfwgyvaf563km4j94cljjut7rrc50g8j49bj729f5yjn8af9dsjamhx3ot22zvid261z4b9xjo8bhbt4qn4qlyq6j1ot7m4cld48b3y4zhrbezeuo1tu1m31f2hffsj8dlpejcapkd76cbdzkl0xjh1q4ahjqae8q8e8kzthivri5hc8wx29rx08rc1ol0pu4zkaq5auwunpk6bdj9369gpt8u5ldra3qm316kdxho5p75m89x5cdoybtt56hcdml440znest20jql6adva4zw2qa77foen0gsbpx3hmo4994l5co7bx3l9kwc4rspvs0n23ht8hri8mzhos5ywfo8qjwoa80qmckrca0x9sguk3ptwhxginxq2torr635cawdlk0veet5nfvfzor5dkmby06u5bay2tp31vaxxbncmqjmwm0lugexo0i7sa585excpeg7qxp8frgwgw76t1j7ixkljz7rejqmk76cffybqparvzrwr826txoa705iqqvf2mkj6e0jfnx6fwwkynmcisoxxiilgcgc9dxzx8l1yr3qhba8dvcqju34sp6sr3v8hcwxij9mtd8axbf8vlwpjt62jyqw8x0pz6htirr4u27buftv07wssam766ql4zsqaqid8hnubv08rz51azcy9pkl6w14hd74pjupmka0qz0x4gzozmdr829ldzfhbbkq9kubplxpb4q4q7p2uhgaqdm5zvd5qrmwv9m6z8hxcqpq51016xkawow30fi98ghp6pxo69toppk52cas3tle3qhnfloe91u5yhystzw3anjjhqq7goaizsmcro54qxmxxzj21gm5ldtuf78y5yccfo6ygpmhi9ja6r7ak2lrjl1hgy1yb3vmlfhl6z2yg2c7fltmshuz47lhcc5y2y860hfwiqotyvkte3vg7n0f4jxxjbg98cg9fzq3bexuw0k0c0022eyzq28bel4sezq3zsk6adcfco087a6b1u6gia6zly7armdlm9pammw8bded2kraktp1ynz99or9cqtic687r2nymm0t3ponti9kdqgg8nm258bh5ynjvef9ri0yioefzysej418wheip5ii5swq7j93pnruer51k6kl58p621kgh7jvgn86n8d6jb62tkt2q4d3og5yf18e264jlqvs6o0v4vytqlxiwf59gthuscn944dmezdojw4ij087o86ep3y270dhmp47ocsfhx49ob8atgoisjld04jlllqzj0royt4cpkuox2pz5ijqecj8ezsvbikm8uyt9syyop5xj822x36e7tasuoyt487ep4kjobg7bie3chp9ouddjt44estskmpw42ggjybfqb3idzwyq9ines9q6mgov19q6tuf9ujn9llxvqaeiixn1lbqwzjyhzylenkmxs519vswtgdis2p9j7a9s0pdoeehkcz3hbir54iazdabjiiin6b8gjdona7v830t08hiyynk5zinyseo6s6rab4i6ne0taku9oevgf52vete6a66ofh849c2afylh72sk1oc97jdfyoskk6cizqbq3vik3b70qe5qvasbfk14w1ajv5h8n157fmsngb0e07nvfqay71h1nszs3roio4ibj40v9iiubhlcvi5d3st8t28rq4sllurk57fwnu1yi0wm06g0eotw8uja9ms4vi5enbsgnok6iet2gilhgoelyfp7wywewsgu8kn9s5o5e9lc9a80hg50qd8kgvc2t7sb3hkswj0udmjuvlwlhjjfkcm8y9xva2ohx3deyq0yz11yveiqll5liqrtm2j9070nkgf8st9yn01s98kkgnvgr84lnxpso0t45aegwmbcyz15bsr6yoe9cwjewr091mqgpa5vlknjnnko9vdhrv1q2pmoqlbcu596rzi8bh334ye9rmugqzr6rplcg7q2q6q1ngfd24bysri2hg59qq60merucy6fkujq6vjwxl0ubjjlp25q6d4h1zizh9qt138psv85ov9ooadp8
bkfy75e029rela7zm99pmf28x3f1x6pcm930hv1qcxgr1w4iqr40xsjx1vu3jyxogl2oe8miwrufazymwtyg2ye3urwp6iizonvbeupxgi8lkevr9ron39i6sbw34zruu3wqarwpj9r5zzwdd2el3astxdj4gevzc1opihpd0wlr7anu157hnijrvry2w6c7l8unpna7roqwzvfi8tyear9h6rl3nk4idwb5437dv6puzwg7bhc7epzrfwr6ucrlp4bmgw5z9psh2uucm84uq462qkce7mnbkdd1893yb7r1bpk0k7bvtjufplc0u5r16inefx8q0jt2cgm44rr4j8va88l9b96eevk7obe6kx8h2zcatbhmm1f1dzanyz924u83sx2sc9gqth90vwkpovg4gk59to3g97ocvh6bys4pzx16hrk7ok2rvysakj5a4td3fec8em72g1qgr8841hcfoj1p8ecwrfnblfh4hcuh8iegdfvhzduwa4ls7h4sc76ax47f7mthvim5c9ido06b4npci14tvaihrpl26knrw6ubc5dyf2yd4s0fvbis446j1b6wp969y9q4pypfrht9npxzclvxnksme4tlmywkfm2e0pop6dzj62iadh7np3cpl1zm5024inlsb8c9drjoklsvk0rikp8uly097v99nrtbiwuqhkae0gl7pmt4ahg01az1pnq52tovd3pftpwxfnrlg8o2h95vvfa1l04ed80j9urrrw8kzhbyt3tc0fbet9vdfrktp57m28wpl1cnj3glle7bytg7eozxgflyym4ynt5wr2kvktye5v04lnsm3ievkvdjee4ap5kb8a7ib5u1zc0q92ytmvoru66kbd62le4sduuq116rjl1ll2vegeqqe4r1ulgintmz8v85u96q3zqz2v3tw8rxrojiazvtae7odwyuyqbbbr5pe4fefan3fypg1lkj35byxg4q1eubi3k6uu8v9454811wgd4net1vwunjp0uujtzy71s8ornd0s9nbz2y5z3ml0j4taxoh3mwlxof2giqd4zxzkee39w63voedgs0u28eisru63297u6e6q5wx14jit83zondbrap5ecylz6e51ygwrtx6w1zskprr1u091zpp0e2e4ors9fozwhlg8hnjkstgs3p9vwphwc7kmujrqsl5o2iuagxazvivv6hrab9exh50987volwgmvi7120fbsvm3yjcc9g82w089e3pc6t3gjcx5x334aquzciey7ugvbqcszcgphndza44sy5l9tie59onklm3zrztp6lygzsx13dr9rpkp4rsjehj3cxm5pqma5yx1yko4m2vhv5u29sms7y7ovt12zyiqgxinp06ajz2oylj67gs8fifyjqhei03h5kbcfdda18kwwl86se64ke4qg7v5ic6vl0y2ad1ke96y3j9jhxrwacumeotfshvp1gnf59h6pn == \k\k\s\f\f\m\s\7\z\j\k\s\z\p\1\1\g\m\j\q\n\i\k\q\x\m\5\l\r\o\1\2\9\v\8\6\l\x\z\6\l\k\f\e\3\1\e\0\s\q\8\3\i\r\o\7\r\5\w\m\8\x\f\4\e\k\a\v\o\h\0\8\i\f\u\d\v\8\o\d\8\t\k\9\b\t\d\r\b\s\v\8\2\y\e\g\x\a\x\w\h\j\9\m\r\l\l\9\v\w\k\n\1\b\3\p\4\k\e\3\g\x\k\3\9\v\h\n\b\i\a\k\6\v\k\0\g\x\9\d\u\6\g\7\n\2\h\e\h\3\s\l\r\9\8\p\n\3\k\s\8\7\e\1\x\k\z\n\f\l\z\y\3\2\k\3\g\5\i\y\7\b\g\l\t\7\8\l\l\w\s\l\n\7\l\f\z\x\a\q\j\x\j\j\v\i\0\x\b\2\5\r\3\s\5\5\k\h\l\g\f\1\m\7\2\0\p\3\m\1\r\q\7\e\z\e\c\8\b\v\k\g\2\j\g\e\p\1\z\t\h\6\3\j\k\5\r\i\w\8\o\7\k\6\x\z\2\f\6\5\r\k\i\e\4\z\d\d\l\r\r\0\g\r\v\3\k\h\c\8\2\6\9\q\3\x\d\w\1\r\3\9\r\0\h\l\3\p\o\s\c\u\t\2\q\6\7\b\y\1\i\7\p\1\s\c\1\w\9\t\3\m\q\8\e\t\1\p\8\c\4\t\s\2\3\r\4\f\q\p\h\6\8\j\3\2\r\2\1\h\0\v\3\0\0\1\q\7\t\b\2\e\6\1\5\8\x\e\0\o\g\i\t\8\m\b\r\0\2\8\7\s\d\u\5\8\d\c\k\5\q\j\w\u\a\5\m\w\i\w\g\o\v\f\t\o\q\c\y\a\0\6\h\u\7\9\s\f\u\0\k\e\7\p\n\x\l\e\a\z\b\5\u\h\6\2\i\n\v\z\d\k\k\g\0\4\i\c\x\m\o\z\j\o\6\h\m\8\z\r\k\w\k\1\u\n\b\k\v\g\b\9\m\7\o\7\w\s\0\e\w\4\2\i\e\1\j\e\9\3\i\8\l\w\s\f\x\l\t\6\f\m\a\1\u\8\l\s\a\u\g\6\c\f\y\h\n\i\d\y\s\j\s\t\b\t\n\m\n\i\1\2\s\d\0\t\5\c\q\k\1\4\d\x\7\j\h\f\x\4\5\l\f\2\z\0\f\a\g\i\a\m\q\f\w\g\y\v\a\f\5\6\3\k\m\4\j\9\4\c\l\j\j\u\t\7\r\r\c\5\0\g\8\j\4\9\b\j\7\2\9\f\5\y\j\n\8\a\f\9\d\s\j\a\m\h\x\3\o\t\2\2\z\v\i\d\2\6\1\z\4\b\9\x\j\o\8\b\h\b\t\4\q\n\4\q\l\y\q\6\j\1\o\t\7\m\4\c\l\d\4\8\b\3\y\4\z\h\r\b\e\z\e\u\o\1\t\u\1\m\3\1\f\2\h\f\f\s\j\8\d\l\p\e\j\c\a\p\k\d\7\6\c\b\d\z\k\l\0\x\j\h\1\q\4\a\h\j\q\a\e\8\q\8\e\8\k\z\t\h\i\v\r\i\5\h\c\8\w\x\2\9\r\x\0\8\r\c\1\o\l\0\p\u\4\z\k\a\q\5\a\u\w\u\n\p\k\6\b\d\j\9\3\6\9\g\p\t\8\u\5\l\d\r\a\3\q\m\3\1\6\k\d\x\h\o\5\p\7\5\m\8\9\x\5\c\d\o\y\b\t\t\5\6\h\c\d\m\l\4\4\0\z\n\e\s\t\2\0\j\q\l\6\a\d\v\a\4\z\w\2\q\a\7\7\f\o\e\n\0\g\s\b\p\x\3\h\m\o\4\9\9\4\l\5\c\o\7\b\x\3\l\9\k\w\c\4\r\s\p\v\s\0\n\2\3\h\t\8\h\r\i\8\m\z\h\o\s\5\y\w\f\o\8\q\j\w\o\a\8\0\q\m\c\k\r\c\a\0\x\9\s\g\u\k\3\p\t\w\h\x\g\i\n\x\q\2\t\o\r\r\6\3\5\c\a\w\d\l\k\0\v\e\e\t\5\n\f\v\f\z\o\r\5\d\k\m\b\y\0\6\u\5\b\a\y\2\t\p\3\1\v\a\x\x\b\n\c\m\q\j\m\w\m\0\l\u\g\e\x\o\0\i\7\s\a\5\8\5\e\x\c\p\e\g\7\q\x\p\8\f\
r\g\w\g\w\7\6\t\1\j\7\i\x\k\l\j\z\7\r\e\j\q\m\k\7\6\c\f\f\y\b\q\p\a\r\v\z\r\w\r\8\2\6\t\x\o\a\7\0\5\i\q\q\v\f\2\m\k\j\6\e\0\j\f\n\x\6\f\w\w\k\y\n\m\c\i\s\o\x\x\i\i\l\g\c\g\c\9\d\x\z\x\8\l\1\y\r\3\q\h\b\a\8\d\v\c\q\j\u\3\4\s\p\6\s\r\3\v\8\h\c\w\x\i\j\9\m\t\d\8\a\x\b\f\8\v\l\w\p\j\t\6\2\j\y\q\w\8\x\0\p\z\6\h\t\i\r\r\4\u\2\7\b\u\f\t\v\0\7\w\s\s\a\m\7\6\6\q\l\4\z\s\q\a\q\i\d\8\h\n\u\b\v\0\8\r\z\5\1\a\z\c\y\9\p\k\l\6\w\1\4\h\d\7\4\p\j\u\p\m\k\a\0\q\z\0\x\4\g\z\o\z\m\d\r\8\2\9\l\d\z\f\h\b\b\k\q\9\k\u\b\p\l\x\p\b\4\q\4\q\7\p\2\u\h\g\a\q\d\m\5\z\v\d\5\q\r\m\w\v\9\m\6\z\8\h\x\c\q\p\q\5\1\0\1\6\x\k\a\w\o\w\3\0\f\i\9\8\g\h\p\6\p\x\o\6\9\t\o\p\p\k\5\2\c\a\s\3\t\l\e\3\q\h\n\f\l\o\e\9\1\u\5\y\h\y\s\t\z\w\3\a\n\j\j\h\q\q\7\g\o\a\i\z\s\m\c\r\o\5\4\q\x\m\x\x\z\j\2\1\g\m\5\l\d\t\u\f\7\8\y\5\y\c\c\f\o\6\y\g\p\m\h\i\9\j\a\6\r\7\a\k\2\l\r\j\l\1\h\g\y\1\y\b\3\v\m\l\f\h\l\6\z\2\y\g\2\c\7\f\l\t\m\s\h\u\z\4\7\l\h\c\c\5\y\2\y\8\6\0\h\f\w\i\q\o\t\y\v\k\t\e\3\v\g\7\n\0\f\4\j\x\x\j\b\g\9\8\c\g\9\f\z\q\3\b\e\x\u\w\0\k\0\c\0\0\2\2\e\y\z\q\2\8\b\e\l\4\s\e\z\q\3\z\s\k\6\a\d\c\f\c\o\0\8\7\a\6\b\1\u\6\g\i\a\6\z\l\y\7\a\r\m\d\l\m\9\p\a\m\m\w\8\b\d\e\d\2\k\r\a\k\t\p\1\y\n\z\9\9\o\r\9\c\q\t\i\c\6\8\7\r\2\n\y\m\m\0\t\3\p\o\n\t\i\9\k\d\q\g\g\8\n\m\2\5\8\b\h\5\y\n\j\v\e\f\9\r\i\0\y\i\o\e\f\z\y\s\e\j\4\1\8\w\h\e\i\p\5\i\i\5\s\w\q\7\j\9\3\p\n\r\u\e\r\5\1\k\6\k\l\5\8\p\6\2\1\k\g\h\7\j\v\g\n\8\6\n\8\d\6\j\b\6\2\t\k\t\2\q\4\d\3\o\g\5\y\f\1\8\e\2\6\4\j\l\q\v\s\6\o\0\v\4\v\y\t\q\l\x\i\w\f\5\9\g\t\h\u\s\c\n\9\4\4\d\m\e\z\d\o\j\w\4\i\j\0\8\7\o\8\6\e\p\3\y\2\7\0\d\h\m\p\4\7\o\c\s\f\h\x\4\9\o\b\8\a\t\g\o\i\s\j\l\d\0\4\j\l\l\l\q\z\j\0\r\o\y\t\4\c\p\k\u\o\x\2\p\z\5\i\j\q\e\c\j\8\e\z\s\v\b\i\k\m\8\u\y\t\9\s\y\y\o\p\5\x\j\8\2\2\x\3\6\e\7\t\a\s\u\o\y\t\4\8\7\e\p\4\k\j\o\b\g\7\b\i\e\3\c\h\p\9\o\u\d\d\j\t\4\4\e\s\t\s\k\m\p\w\4\2\g\g\j\y\b\f\q\b\3\i\d\z\w\y\q\9\i\n\e\s\9\q\6\m\g\o\v\1\9\q\6\t\u\f\9\u\j\n\9\l\l\x\v\q\a\e\i\i\x\n\1\l\b\q\w\z\j\y\h\z\y\l\e\n\k\m\x\s\5\1\9\v\s\w\t\g\d\i\s\2\p\9\j\7\a\9\s\0\p\d\o\e\e\h\k\c\z\3\h\b\i\r\5\4\i\a\z\d\a\b\j\i\i\i\n\6\b\8\g\j\d\o\n\a\7\v\8\3\0\t\0\8\h\i\y\y\n\k\5\z\i\n\y\s\e\o\6\s\6\r\a\b\4\i\6\n\e\0\t\a\k\u\9\o\e\v\g\f\5\2\v\e\t\e\6\a\6\6\o\f\h\8\4\9\c\2\a\f\y\l\h\7\2\s\k\1\o\c\9\7\j\d\f\y\o\s\k\k\6\c\i\z\q\b\q\3\v\i\k\3\b\7\0\q\e\5\q\v\a\s\b\f\k\1\4\w\1\a\j\v\5\h\8\n\1\5\7\f\m\s\n\g\b\0\e\0\7\n\v\f\q\a\y\7\1\h\1\n\s\z\s\3\r\o\i\o\4\i\b\j\4\0\v\9\i\i\u\b\h\l\c\v\i\5\d\3\s\t\8\t\2\8\r\q\4\s\l\l\u\r\k\5\7\f\w\n\u\1\y\i\0\w\m\0\6\g\0\e\o\t\w\8\u\j\a\9\m\s\4\v\i\5\e\n\b\s\g\n\o\k\6\i\e\t\2\g\i\l\h\g\o\e\l\y\f\p\7\w\y\w\e\w\s\g\u\8\k\n\9\s\5\o\5\e\9\l\c\9\a\8\0\h\g\5\0\q\d\8\k\g\v\c\2\t\7\s\b\3\h\k\s\w\j\0\u\d\m\j\u\v\l\w\l\h\j\j\f\k\c\m\8\y\9\x\v\a\2\o\h\x\3\d\e\y\q\0\y\z\1\1\y\v\e\i\q\l\l\5\l\i\q\r\t\m\2\j\9\0\7\0\n\k\g\f\8\s\t\9\y\n\0\1\s\9\8\k\k\g\n\v\g\r\8\4\l\n\x\p\s\o\0\t\4\5\a\e\g\w\m\b\c\y\z\1\5\b\s\r\6\y\o\e\9\c\w\j\e\w\r\0\9\1\m\q\g\p\a\5\v\l\k\n\j\n\n\k\o\9\v\d\h\r\v\1\q\2\p\m\o\q\l\b\c\u\5\9\6\r\z\i\8\b\h\3\3\4\y\e\9\r\m\u\g\q\z\r\6\r\p\l\c\g\7\q\2\q\6\q\1\n\g\f\d\2\4\b\y\s\r\i\2\h\g\5\9\q\q\6\0\m\e\r\u\c\y\6\f\k\u\j\q\6\v\j\w\x\l\0\u\b\j\j\l\p\2\5\q\6\d\4\h\1\z\i\z\h\9\q\t\1\3\8\p\s\v\8\5\o\v\9\o\o\a\d\p\8\b\k\f\y\7\5\e\0\2\9\r\e\l\a\7\z\m\9\9\p\m\f\2\8\x\3\f\1\x\6\p\c\m\9\3\0\h\v\1\q\c\x\g\r\1\w\4\i\q\r\4\0\x\s\j\x\1\v\u\3\j\y\x\o\g\l\2\o\e\8\m\i\w\r\u\f\a\z\y\m\w\t\y\g\2\y\e\3\u\r\w\p\6\i\i\z\o\n\v\b\e\u\p\x\g\i\8\l\k\e\v\r\9\r\o\n\3\9\i\6\s\b\w\3\4\z\r\u\u\3\w\q\a\r\w\p\j\9\r\5\z\z\w\d\d\2\e\l\3\a\s\t\x\d\j\4\g\e\v\z\c\1\o\p\i\h\p\d\0\w\l\r\7\a\n\u\1\5\7\h\n\i\j\r\v\r\y\2\w\6\c\7\l\8\u\n\p\n\a\7\r
\o\q\w\z\v\f\i\8\t\y\e\a\r\9\h\6\r\l\3\n\k\4\i\d\w\b\5\4\3\7\d\v\6\p\u\z\w\g\7\b\h\c\7\e\p\z\r\f\w\r\6\u\c\r\l\p\4\b\m\g\w\5\z\9\p\s\h\2\u\u\c\m\8\4\u\q\4\6\2\q\k\c\e\7\m\n\b\k\d\d\1\8\9\3\y\b\7\r\1\b\p\k\0\k\7\b\v\t\j\u\f\p\l\c\0\u\5\r\1\6\i\n\e\f\x\8\q\0\j\t\2\c\g\m\4\4\r\r\4\j\8\v\a\8\8\l\9\b\9\6\e\e\v\k\7\o\b\e\6\k\x\8\h\2\z\c\a\t\b\h\m\m\1\f\1\d\z\a\n\y\z\9\2\4\u\8\3\s\x\2\s\c\9\g\q\t\h\9\0\v\w\k\p\o\v\g\4\g\k\5\9\t\o\3\g\9\7\o\c\v\h\6\b\y\s\4\p\z\x\1\6\h\r\k\7\o\k\2\r\v\y\s\a\k\j\5\a\4\t\d\3\f\e\c\8\e\m\7\2\g\1\q\g\r\8\8\4\1\h\c\f\o\j\1\p\8\e\c\w\r\f\n\b\l\f\h\4\h\c\u\h\8\i\e\g\d\f\v\h\z\d\u\w\a\4\l\s\7\h\4\s\c\7\6\a\x\4\7\f\7\m\t\h\v\i\m\5\c\9\i\d\o\0\6\b\4\n\p\c\i\1\4\t\v\a\i\h\r\p\l\2\6\k\n\r\w\6\u\b\c\5\d\y\f\2\y\d\4\s\0\f\v\b\i\s\4\4\6\j\1\b\6\w\p\9\6\9\y\9\q\4\p\y\p\f\r\h\t\9\n\p\x\z\c\l\v\x\n\k\s\m\e\4\t\l\m\y\w\k\f\m\2\e\0\p\o\p\6\d\z\j\6\2\i\a\d\h\7\n\p\3\c\p\l\1\z\m\5\0\2\4\i\n\l\s\b\8\c\9\d\r\j\o\k\l\s\v\k\0\r\i\k\p\8\u\l\y\0\9\7\v\9\9\n\r\t\b\i\w\u\q\h\k\a\e\0\g\l\7\p\m\t\4\a\h\g\0\1\a\z\1\p\n\q\5\2\t\o\v\d\3\p\f\t\p\w\x\f\n\r\l\g\8\o\2\h\9\5\v\v\f\a\1\l\0\4\e\d\8\0\j\9\u\r\r\r\w\8\k\z\h\b\y\t\3\t\c\0\f\b\e\t\9\v\d\f\r\k\t\p\5\7\m\2\8\w\p\l\1\c\n\j\3\g\l\l\e\7\b\y\t\g\7\e\o\z\x\g\f\l\y\y\m\4\y\n\t\5\w\r\2\k\v\k\t\y\e\5\v\0\4\l\n\s\m\3\i\e\v\k\v\d\j\e\e\4\a\p\5\k\b\8\a\7\i\b\5\u\1\z\c\0\q\9\2\y\t\m\v\o\r\u\6\6\k\b\d\6\2\l\e\4\s\d\u\u\q\1\1\6\r\j\l\1\l\l\2\v\e\g\e\q\q\e\4\r\1\u\l\g\i\n\t\m\z\8\v\8\5\u\9\6\q\3\z\q\z\2\v\3\t\w\8\r\x\r\o\j\i\a\z\v\t\a\e\7\o\d\w\y\u\y\q\b\b\b\r\5\p\e\4\f\e\f\a\n\3\f\y\p\g\1\l\k\j\3\5\b\y\x\g\4\q\1\e\u\b\i\3\k\6\u\u\8\v\9\4\5\4\8\1\1\w\g\d\4\n\e\t\1\v\w\u\n\j\p\0\u\u\j\t\z\y\7\1\s\8\o\r\n\d\0\s\9\n\b\z\2\y\5\z\3\m\l\0\j\4\t\a\x\o\h\3\m\w\l\x\o\f\2\g\i\q\d\4\z\x\z\k\e\e\3\9\w\6\3\v\o\e\d\g\s\0\u\2\8\e\i\s\r\u\6\3\2\9\7\u\6\e\6\q\5\w\x\1\4\j\i\t\8\3\z\o\n\d\b\r\a\p\5\e\c\y\l\z\6\e\5\1\y\g\w\r\t\x\6\w\1\z\s\k\p\r\r\1\u\0\9\1\z\p\p\0\e\2\e\4\o\r\s\9\f\o\z\w\h\l\g\8\h\n\j\k\s\t\g\s\3\p\9\v\w\p\h\w\c\7\k\m\u\j\r\q\s\l\5\o\2\i\u\a\g\x\a\z\v\i\v\v\6\h\r\a\b\9\e\x\h\5\0\9\8\7\v\o\l\w\g\m\v\i\7\1\2\0\f\b\s\v\m\3\y\j\c\c\9\g\8\2\w\0\8\9\e\3\p\c\6\t\3\g\j\c\x\5\x\3\3\4\a\q\u\z\c\i\e\y\7\u\g\v\b\q\c\s\z\c\g\p\h\n\d\z\a\4\4\s\y\5\l\9\t\i\e\5\9\o\n\k\l\m\3\z\r\z\t\p\6\l\y\g\z\s\x\1\3\d\r\9\r\p\k\p\4\r\s\j\e\h\j\3\c\x\m\5\p\q\m\a\5\y\x\1\y\k\o\4\m\2\v\h\v\5\u\2\9\s\m\s\7\y\7\o\v\t\1\2\z\y\i\q\g\x\i\n\p\0\6\a\j\z\2\o\y\l\j\6\7\g\s\8\f\i\f\y\j\q\h\e\i\0\3\h\5\k\b\c\f\d\d\a\1\8\k\w\w\l\8\6\s\e\6\4\k\e\4\q\g\7\v\5\i\c\6\v\l\0\y\2\a\d\1\k\e\9\6\y\3\j\9\j\h\x\r\w\a\c\u\m\e\o\t\f\s\h\v\p\1\g\n\f\5\9\h\6\p\n ]] 00:07:46.035 00:07:46.035 real 0m1.309s 00:07:46.035 user 0m0.880s 00:07:46.035 sys 0m0.641s 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:46.035 ************************************ 00:07:46.035 END TEST dd_rw_offset 00:07:46.035 ************************************ 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.035 10:01:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.035 [2024-11-19 10:01:59.755565] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:46.035 [2024-11-19 10:01:59.755690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59961 ] 00:07:46.035 { 00:07:46.035 "subsystems": [ 00:07:46.035 { 00:07:46.035 "subsystem": "bdev", 00:07:46.035 "config": [ 00:07:46.035 { 00:07:46.035 "params": { 00:07:46.035 "trtype": "pcie", 00:07:46.035 "traddr": "0000:00:10.0", 00:07:46.035 "name": "Nvme0" 00:07:46.035 }, 00:07:46.035 "method": "bdev_nvme_attach_controller" 00:07:46.035 }, 00:07:46.035 { 00:07:46.035 "method": "bdev_wait_for_examine" 00:07:46.035 } 00:07:46.035 ] 00:07:46.035 } 00:07:46.035 ] 00:07:46.035 } 00:07:46.035 [2024-11-19 10:01:59.896188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.293 [2024-11-19 10:01:59.960429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.293 [2024-11-19 10:02:00.019291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.293  [2024-11-19T10:02:00.441Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:46.552 00:07:46.552 10:02:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.552 ************************************ 00:07:46.552 END TEST spdk_dd_basic_rw 00:07:46.552 ************************************ 00:07:46.552 00:07:46.552 real 0m17.251s 00:07:46.552 user 0m12.188s 00:07:46.552 sys 0m6.795s 00:07:46.552 10:02:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.552 10:02:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.552 10:02:00 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:46.552 10:02:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.552 10:02:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.552 10:02:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.552 ************************************ 00:07:46.552 START TEST spdk_dd_posix 00:07:46.552 ************************************ 00:07:46.552 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:46.810 * Looking for test storage... 
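
Editor's note: the cleanup traced above (clear_nvme) sizes a 0xffff-byte region but issues a single 1 MiB zero-fill write from /dev/zero before the scratch dump files are removed, which matches the "Copying: 1024/1024 [kB]" progress line. A rough equivalent, reusing SPDK_DD and conf from the earlier sketch:

    # Sketch only: zero the first 1 MiB of the bdev, then drop the scratch files.
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --count=1 --ob=Nvme0n1 --json <(printf '%s' "$conf")
    rm -f dd.dump0 dd.dump1
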
00:07:46.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.810 --rc genhtml_branch_coverage=1 00:07:46.810 --rc genhtml_function_coverage=1 00:07:46.810 --rc genhtml_legend=1 00:07:46.810 --rc geninfo_all_blocks=1 00:07:46.810 --rc geninfo_unexecuted_blocks=1 00:07:46.810 00:07:46.810 ' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.810 --rc genhtml_branch_coverage=1 00:07:46.810 --rc genhtml_function_coverage=1 00:07:46.810 --rc genhtml_legend=1 00:07:46.810 --rc geninfo_all_blocks=1 00:07:46.810 --rc geninfo_unexecuted_blocks=1 00:07:46.810 00:07:46.810 ' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.810 --rc genhtml_branch_coverage=1 00:07:46.810 --rc genhtml_function_coverage=1 00:07:46.810 --rc genhtml_legend=1 00:07:46.810 --rc geninfo_all_blocks=1 00:07:46.810 --rc geninfo_unexecuted_blocks=1 00:07:46.810 00:07:46.810 ' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.810 --rc genhtml_branch_coverage=1 00:07:46.810 --rc genhtml_function_coverage=1 00:07:46.810 --rc genhtml_legend=1 00:07:46.810 --rc geninfo_all_blocks=1 00:07:46.810 --rc geninfo_unexecuted_blocks=1 00:07:46.810 00:07:46.810 ' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:46.810 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:46.811 * First test run, liburing in use 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:46.811 ************************************ 00:07:46.811 START TEST dd_flag_append 00:07:46.811 ************************************ 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=lu7bfez3n8zfyvtmpvttht2lzq9uanl4 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=it66c0xifgh8qkfczkca2ds3qfir0z9i 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s lu7bfez3n8zfyvtmpvttht2lzq9uanl4 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s it66c0xifgh8qkfczkca2ds3qfir0z9i 00:07:46.811 10:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:46.811 [2024-11-19 10:02:00.681689] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
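
Editor's note: the append test starting above writes one 32-character random string into each dump file, copies dd.dump0 onto dd.dump1 with --oflag=append, and then expects dd.dump1 to contain the second string followed by the first. A condensed sketch of that flow; the tr/head pipeline is an illustrative stand-in for the suite's gen_bytes helper.

    # Sketch only: verify that --oflag=append appends to the output file rather than truncating it.
    dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    printf '%s' "$dump0" > dd.dump0
    printf '%s' "$dump1" > dd.dump1
    "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo "append OK"
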
00:07:46.811 [2024-11-19 10:02:00.682110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60033 ] 00:07:47.068 [2024-11-19 10:02:00.829182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.068 [2024-11-19 10:02:00.894309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.068 [2024-11-19 10:02:00.948915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.326  [2024-11-19T10:02:01.215Z] Copying: 32/32 [B] (average 31 kBps) 00:07:47.326 00:07:47.326 ************************************ 00:07:47.326 END TEST dd_flag_append 00:07:47.326 ************************************ 00:07:47.326 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ it66c0xifgh8qkfczkca2ds3qfir0z9ilu7bfez3n8zfyvtmpvttht2lzq9uanl4 == \i\t\6\6\c\0\x\i\f\g\h\8\q\k\f\c\z\k\c\a\2\d\s\3\q\f\i\r\0\z\9\i\l\u\7\b\f\e\z\3\n\8\z\f\y\v\t\m\p\v\t\t\h\t\2\l\z\q\9\u\a\n\l\4 ]] 00:07:47.326 00:07:47.326 real 0m0.579s 00:07:47.326 user 0m0.324s 00:07:47.326 sys 0m0.274s 00:07:47.326 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.326 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:47.584 ************************************ 00:07:47.584 START TEST dd_flag_directory 00:07:47.584 ************************************ 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.584 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.584 [2024-11-19 10:02:01.302496] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:47.584 [2024-11-19 10:02:01.302601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:07:47.584 [2024-11-19 10:02:01.443726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.842 [2024-11-19 10:02:01.509766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.842 [2024-11-19 10:02:01.565187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.842 [2024-11-19 10:02:01.604364] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:47.842 [2024-11-19 10:02:01.604423] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:47.842 [2024-11-19 10:02:01.604457] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.100 [2024-11-19 10:02:01.733533] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.100 10:02:01 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.100 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.101 10:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.101 [2024-11-19 10:02:01.867531] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:48.101 [2024-11-19 10:02:01.867790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60071 ] 00:07:48.359 [2024-11-19 10:02:02.021904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.359 [2024-11-19 10:02:02.082275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.359 [2024-11-19 10:02:02.135031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.359 [2024-11-19 10:02:02.172276] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:48.359 [2024-11-19 10:02:02.172352] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:48.359 [2024-11-19 10:02:02.172389] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.620 [2024-11-19 10:02:02.288891] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.620 00:07:48.620 real 0m1.120s 00:07:48.620 user 0m0.607s 00:07:48.620 sys 0m0.299s 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:48.620 ************************************ 00:07:48.620 END TEST dd_flag_directory 00:07:48.620 ************************************ 00:07:48.620 10:02:02 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:48.620 ************************************ 00:07:48.620 START TEST dd_flag_nofollow 00:07:48.620 ************************************ 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.620 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.620 [2024-11-19 10:02:02.487296] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
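
Editor's note: the directory-flag checks above are negative tests. The framework's NOT wrapper runs spdk_dd and only passes when the copy fails, here with "Not a directory" because --iflag=directory (and later --oflag=directory) is applied to a regular dump file. A bare-bones version of that expectation, with a plain if statement standing in for the wrapper's exit-status bookkeeping:

    # Sketch only: the copy must fail because dd.dump0 is a regular file, not a directory.
    if "$SPDK_DD" --if=dd.dump0 --iflag=directory --of=dd.dump0; then
        echo "unexpected success" >&2
        exit 1
    fi
    echo "directory flag correctly rejected"
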
00:07:48.620 [2024-11-19 10:02:02.487435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:07:48.878 [2024-11-19 10:02:02.633528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.878 [2024-11-19 10:02:02.693178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.878 [2024-11-19 10:02:02.751311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.136 [2024-11-19 10:02:02.788745] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:49.136 [2024-11-19 10:02:02.788828] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:49.136 [2024-11-19 10:02:02.788862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.136 [2024-11-19 10:02:02.910003] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.136 10:02:02 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.136 10:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.395 [2024-11-19 10:02:03.054034] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:49.395 [2024-11-19 10:02:03.054165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60109 ] 00:07:49.395 [2024-11-19 10:02:03.198876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.395 [2024-11-19 10:02:03.257373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.654 [2024-11-19 10:02:03.312481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.654 [2024-11-19 10:02:03.350810] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:49.654 [2024-11-19 10:02:03.350886] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:49.654 [2024-11-19 10:02:03.350923] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.654 [2024-11-19 10:02:03.475641] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:49.913 10:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.913 [2024-11-19 10:02:03.607567] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
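
Editor's note: the nofollow test traced above hinges on symlinks created with ln -fs. With --iflag=nofollow or --oflag=nofollow the copy through a link must fail ("Too many levels of symbolic links"), while the final copy through the same link without the flag succeeds. Condensed below, with cmp standing in for the suite's string comparison of the payloads:

    # Sketch only: nofollow must reject symlinked endpoints; a plain copy follows the link.
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    ! "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1    # expected to fail
    ! "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow    # expected to fail
    "$SPDK_DD" --if=dd.dump0.link --of=dd.dump1                       # follows the link
    cmp dd.dump0 dd.dump1 && echo "nofollow behaviour OK"
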
00:07:49.913 [2024-11-19 10:02:03.607675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:07:49.913 [2024-11-19 10:02:03.747641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.172 [2024-11-19 10:02:03.814293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.172 [2024-11-19 10:02:03.868689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.172  [2024-11-19T10:02:04.320Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.431 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ fcui4937pbrwxd1vdknupdh245tlax354emf3yodieojhjr6ukhk2l9rm5vdcvgc1zre7mstyjcg6txc4dekq173ycp24n54w4re759wnn0l8gg377vvyihzsoltj57h6tu7k337kof3ex92c1anq3evpo26kw6f9iuxf4rslbn9dxyzlu10uvpl3xem6yucsov6k9q82m98qysaxifp6jq7y5by08tib8t6ym03vabwjn6lvudgjhxlo0rvebdcxgm6tpff9mzennmf3jr6paau6h383sl5c40koscq7o0zs3vopc8r8cvjr5bh600xdbvzxfcxu9c3w6ue6ds3eh8gnf43e5qkjkrcf8de9sg1xtj3tozyypvkolbk5l4tbbvov4m04f5rl89ilq8k60edjlqqycx7y8m4z2fy63xyrfh0r1nqz75w2h88jcifs0kt9fvndg44emj56fsuouvkucdb3bzg3esozcrdugmrbez8l8awpj9ekzyvyhq3 == \f\c\u\i\4\9\3\7\p\b\r\w\x\d\1\v\d\k\n\u\p\d\h\2\4\5\t\l\a\x\3\5\4\e\m\f\3\y\o\d\i\e\o\j\h\j\r\6\u\k\h\k\2\l\9\r\m\5\v\d\c\v\g\c\1\z\r\e\7\m\s\t\y\j\c\g\6\t\x\c\4\d\e\k\q\1\7\3\y\c\p\2\4\n\5\4\w\4\r\e\7\5\9\w\n\n\0\l\8\g\g\3\7\7\v\v\y\i\h\z\s\o\l\t\j\5\7\h\6\t\u\7\k\3\3\7\k\o\f\3\e\x\9\2\c\1\a\n\q\3\e\v\p\o\2\6\k\w\6\f\9\i\u\x\f\4\r\s\l\b\n\9\d\x\y\z\l\u\1\0\u\v\p\l\3\x\e\m\6\y\u\c\s\o\v\6\k\9\q\8\2\m\9\8\q\y\s\a\x\i\f\p\6\j\q\7\y\5\b\y\0\8\t\i\b\8\t\6\y\m\0\3\v\a\b\w\j\n\6\l\v\u\d\g\j\h\x\l\o\0\r\v\e\b\d\c\x\g\m\6\t\p\f\f\9\m\z\e\n\n\m\f\3\j\r\6\p\a\a\u\6\h\3\8\3\s\l\5\c\4\0\k\o\s\c\q\7\o\0\z\s\3\v\o\p\c\8\r\8\c\v\j\r\5\b\h\6\0\0\x\d\b\v\z\x\f\c\x\u\9\c\3\w\6\u\e\6\d\s\3\e\h\8\g\n\f\4\3\e\5\q\k\j\k\r\c\f\8\d\e\9\s\g\1\x\t\j\3\t\o\z\y\y\p\v\k\o\l\b\k\5\l\4\t\b\b\v\o\v\4\m\0\4\f\5\r\l\8\9\i\l\q\8\k\6\0\e\d\j\l\q\q\y\c\x\7\y\8\m\4\z\2\f\y\6\3\x\y\r\f\h\0\r\1\n\q\z\7\5\w\2\h\8\8\j\c\i\f\s\0\k\t\9\f\v\n\d\g\4\4\e\m\j\5\6\f\s\u\o\u\v\k\u\c\d\b\3\b\z\g\3\e\s\o\z\c\r\d\u\g\m\r\b\e\z\8\l\8\a\w\p\j\9\e\k\z\y\v\y\h\q\3 ]] 00:07:50.431 00:07:50.431 real 0m1.689s 00:07:50.431 user 0m0.951s 00:07:50.431 sys 0m0.554s 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.431 ************************************ 00:07:50.431 END TEST dd_flag_nofollow 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:50.431 ************************************ 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.431 ************************************ 00:07:50.431 START TEST dd_flag_noatime 00:07:50.431 ************************************ 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:50.431 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732010523 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732010524 00:07:50.432 10:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:51.368 10:02:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.368 [2024-11-19 10:02:05.252433] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:51.368 [2024-11-19 10:02:05.252548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60161 ] 00:07:51.627 [2024-11-19 10:02:05.398486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.627 [2024-11-19 10:02:05.440283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.627 [2024-11-19 10:02:05.493762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.885  [2024-11-19T10:02:05.774Z] Copying: 512/512 [B] (average 500 kBps) 00:07:51.885 00:07:51.885 10:02:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.885 10:02:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732010523 )) 00:07:51.885 10:02:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.885 10:02:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732010524 )) 00:07:51.885 10:02:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.144 [2024-11-19 10:02:05.787614] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
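
Editor's note: the noatime test records the access time of each dump file with stat --printf=%X, sleeps past the one-second timestamp granularity, copies with --iflag=noatime, and asserts the source atime did not move; a second copy without the flag must then advance it. A minimal sketch of the first half, assuming the filesystem updates atime at all (variable names here are illustrative):

    # Sketch only: --iflag=noatime should leave the source file's access time untouched.
    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    "$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
    atime_after=$(stat --printf=%X dd.dump0)
    (( atime_after == atime_before )) && echo "noatime respected"
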
00:07:52.144 [2024-11-19 10:02:05.787727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60180 ] 00:07:52.144 [2024-11-19 10:02:05.932039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.144 [2024-11-19 10:02:05.988912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.402 [2024-11-19 10:02:06.044720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.402  [2024-11-19T10:02:06.291Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.402 00:07:52.402 10:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.402 10:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732010526 )) 00:07:52.402 00:07:52.402 real 0m2.097s 00:07:52.402 user 0m0.564s 00:07:52.402 sys 0m0.587s 00:07:52.402 10:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.402 10:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:52.402 ************************************ 00:07:52.402 END TEST dd_flag_noatime 00:07:52.402 ************************************ 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:52.661 ************************************ 00:07:52.661 START TEST dd_flags_misc 00:07:52.661 ************************************ 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.661 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:52.661 [2024-11-19 10:02:06.389057] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
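
Editor's note: dd_flags_misc, which starts above, crosses every read flag in (direct nonblock) with every write flag in (direct nonblock sync dsync) and re-verifies the 512-byte payload after each copy; the direct/direct and direct/nonblock iterations are the ones visible in this part of the log. The loop shape, condensed, with cmp standing in for the suite's payload comparison:

    # Sketch only: exercise each read/write open-flag combination in turn.
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
            cmp dd.dump0 dd.dump1 || { echo "mismatch for $flag_ro/$flag_rw" >&2; exit 1; }
        done
    done
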
00:07:52.661 [2024-11-19 10:02:06.389171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60203 ] 00:07:52.661 [2024-11-19 10:02:06.534578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.919 [2024-11-19 10:02:06.598100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.919 [2024-11-19 10:02:06.654976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.919  [2024-11-19T10:02:07.067Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.178 00:07:53.178 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ g439a3p4kbzhythi5bsu4y2ya5kg8mna1t610d8yihhyr8yg47ojvtmdvbbu42s1qlytjbuxki1pek0agzumpsdpb14x84d21rc85k2al2w7bzkqvr1sg7nk6njy299bf9elu3ldfq32riyxkae4mfavhyzypyz3jruac42wk5ci82ls4q9crr5ayhr891ld47cvkhulekceyu0hdykk4jsrite0eub9eop3wghvxcsjjdztb8xxiqn4mm3oc01e7ny3yg6hizjfw5x6omy1x7425odgi3cde54xisfemf8umg0fys7gwbnla02zs3syw2dcuporcwho43osmh7zezdqgxacaj1pvtleogaf32rls1i70f8hp749hq6nmmpndo9r8lwhffh05bohz9tsahf1rfbayiamgfqe4wkzm6voymc2am0ylptwi8wky92a10pcaet51eb2jorv18dcstckazd1nxzz47t8ja6a8ixovz3v7odxdois6n9k77zx == \g\4\3\9\a\3\p\4\k\b\z\h\y\t\h\i\5\b\s\u\4\y\2\y\a\5\k\g\8\m\n\a\1\t\6\1\0\d\8\y\i\h\h\y\r\8\y\g\4\7\o\j\v\t\m\d\v\b\b\u\4\2\s\1\q\l\y\t\j\b\u\x\k\i\1\p\e\k\0\a\g\z\u\m\p\s\d\p\b\1\4\x\8\4\d\2\1\r\c\8\5\k\2\a\l\2\w\7\b\z\k\q\v\r\1\s\g\7\n\k\6\n\j\y\2\9\9\b\f\9\e\l\u\3\l\d\f\q\3\2\r\i\y\x\k\a\e\4\m\f\a\v\h\y\z\y\p\y\z\3\j\r\u\a\c\4\2\w\k\5\c\i\8\2\l\s\4\q\9\c\r\r\5\a\y\h\r\8\9\1\l\d\4\7\c\v\k\h\u\l\e\k\c\e\y\u\0\h\d\y\k\k\4\j\s\r\i\t\e\0\e\u\b\9\e\o\p\3\w\g\h\v\x\c\s\j\j\d\z\t\b\8\x\x\i\q\n\4\m\m\3\o\c\0\1\e\7\n\y\3\y\g\6\h\i\z\j\f\w\5\x\6\o\m\y\1\x\7\4\2\5\o\d\g\i\3\c\d\e\5\4\x\i\s\f\e\m\f\8\u\m\g\0\f\y\s\7\g\w\b\n\l\a\0\2\z\s\3\s\y\w\2\d\c\u\p\o\r\c\w\h\o\4\3\o\s\m\h\7\z\e\z\d\q\g\x\a\c\a\j\1\p\v\t\l\e\o\g\a\f\3\2\r\l\s\1\i\7\0\f\8\h\p\7\4\9\h\q\6\n\m\m\p\n\d\o\9\r\8\l\w\h\f\f\h\0\5\b\o\h\z\9\t\s\a\h\f\1\r\f\b\a\y\i\a\m\g\f\q\e\4\w\k\z\m\6\v\o\y\m\c\2\a\m\0\y\l\p\t\w\i\8\w\k\y\9\2\a\1\0\p\c\a\e\t\5\1\e\b\2\j\o\r\v\1\8\d\c\s\t\c\k\a\z\d\1\n\x\z\z\4\7\t\8\j\a\6\a\8\i\x\o\v\z\3\v\7\o\d\x\d\o\i\s\6\n\9\k\7\7\z\x ]] 00:07:53.178 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.178 10:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:53.178 [2024-11-19 10:02:06.953061] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
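The very long [[ ... == ... ]] line above is the verification step, not corruption in the log: posix.sh reads dd.dump1 back and compares it against the 512-byte alphanumeric payload it generated, and under set -x bash prints the right-hand side of == with every character backslash-escaped to show it is matched literally rather than as a glob pattern, which is why each payload appears twice. The check reduces to something like the following sketch (payload generation here is a stand-in for the harness's gen_bytes 512):

    # What each giant [[ ... == ... ]] trace line amounts to.
    payload=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 512)
    printf '%s' "$payload" > dd.dump0
    dd if=dd.dump0 iflag=direct of=dd.dump1 oflag=direct bs=512 count=1 status=none
    [[ "$(< dd.dump1)" == "$payload" ]] && echo "round-trip intact"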
00:07:53.178 [2024-11-19 10:02:06.953187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60218 ] 00:07:53.437 [2024-11-19 10:02:07.095565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.437 [2024-11-19 10:02:07.153899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.437 [2024-11-19 10:02:07.205860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.437  [2024-11-19T10:02:07.585Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.696 00:07:53.696 10:02:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ g439a3p4kbzhythi5bsu4y2ya5kg8mna1t610d8yihhyr8yg47ojvtmdvbbu42s1qlytjbuxki1pek0agzumpsdpb14x84d21rc85k2al2w7bzkqvr1sg7nk6njy299bf9elu3ldfq32riyxkae4mfavhyzypyz3jruac42wk5ci82ls4q9crr5ayhr891ld47cvkhulekceyu0hdykk4jsrite0eub9eop3wghvxcsjjdztb8xxiqn4mm3oc01e7ny3yg6hizjfw5x6omy1x7425odgi3cde54xisfemf8umg0fys7gwbnla02zs3syw2dcuporcwho43osmh7zezdqgxacaj1pvtleogaf32rls1i70f8hp749hq6nmmpndo9r8lwhffh05bohz9tsahf1rfbayiamgfqe4wkzm6voymc2am0ylptwi8wky92a10pcaet51eb2jorv18dcstckazd1nxzz47t8ja6a8ixovz3v7odxdois6n9k77zx == \g\4\3\9\a\3\p\4\k\b\z\h\y\t\h\i\5\b\s\u\4\y\2\y\a\5\k\g\8\m\n\a\1\t\6\1\0\d\8\y\i\h\h\y\r\8\y\g\4\7\o\j\v\t\m\d\v\b\b\u\4\2\s\1\q\l\y\t\j\b\u\x\k\i\1\p\e\k\0\a\g\z\u\m\p\s\d\p\b\1\4\x\8\4\d\2\1\r\c\8\5\k\2\a\l\2\w\7\b\z\k\q\v\r\1\s\g\7\n\k\6\n\j\y\2\9\9\b\f\9\e\l\u\3\l\d\f\q\3\2\r\i\y\x\k\a\e\4\m\f\a\v\h\y\z\y\p\y\z\3\j\r\u\a\c\4\2\w\k\5\c\i\8\2\l\s\4\q\9\c\r\r\5\a\y\h\r\8\9\1\l\d\4\7\c\v\k\h\u\l\e\k\c\e\y\u\0\h\d\y\k\k\4\j\s\r\i\t\e\0\e\u\b\9\e\o\p\3\w\g\h\v\x\c\s\j\j\d\z\t\b\8\x\x\i\q\n\4\m\m\3\o\c\0\1\e\7\n\y\3\y\g\6\h\i\z\j\f\w\5\x\6\o\m\y\1\x\7\4\2\5\o\d\g\i\3\c\d\e\5\4\x\i\s\f\e\m\f\8\u\m\g\0\f\y\s\7\g\w\b\n\l\a\0\2\z\s\3\s\y\w\2\d\c\u\p\o\r\c\w\h\o\4\3\o\s\m\h\7\z\e\z\d\q\g\x\a\c\a\j\1\p\v\t\l\e\o\g\a\f\3\2\r\l\s\1\i\7\0\f\8\h\p\7\4\9\h\q\6\n\m\m\p\n\d\o\9\r\8\l\w\h\f\f\h\0\5\b\o\h\z\9\t\s\a\h\f\1\r\f\b\a\y\i\a\m\g\f\q\e\4\w\k\z\m\6\v\o\y\m\c\2\a\m\0\y\l\p\t\w\i\8\w\k\y\9\2\a\1\0\p\c\a\e\t\5\1\e\b\2\j\o\r\v\1\8\d\c\s\t\c\k\a\z\d\1\n\x\z\z\4\7\t\8\j\a\6\a\8\i\x\o\v\z\3\v\7\o\d\x\d\o\i\s\6\n\9\k\7\7\z\x ]] 00:07:53.696 10:02:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.696 10:02:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:53.696 [2024-11-19 10:02:07.496602] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:53.696 [2024-11-19 10:02:07.496727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60222 ] 00:07:53.956 [2024-11-19 10:02:07.643006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.956 [2024-11-19 10:02:07.706663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.956 [2024-11-19 10:02:07.761197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.956  [2024-11-19T10:02:08.104Z] Copying: 512/512 [B] (average 250 kBps) 00:07:54.215 00:07:54.215 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ g439a3p4kbzhythi5bsu4y2ya5kg8mna1t610d8yihhyr8yg47ojvtmdvbbu42s1qlytjbuxki1pek0agzumpsdpb14x84d21rc85k2al2w7bzkqvr1sg7nk6njy299bf9elu3ldfq32riyxkae4mfavhyzypyz3jruac42wk5ci82ls4q9crr5ayhr891ld47cvkhulekceyu0hdykk4jsrite0eub9eop3wghvxcsjjdztb8xxiqn4mm3oc01e7ny3yg6hizjfw5x6omy1x7425odgi3cde54xisfemf8umg0fys7gwbnla02zs3syw2dcuporcwho43osmh7zezdqgxacaj1pvtleogaf32rls1i70f8hp749hq6nmmpndo9r8lwhffh05bohz9tsahf1rfbayiamgfqe4wkzm6voymc2am0ylptwi8wky92a10pcaet51eb2jorv18dcstckazd1nxzz47t8ja6a8ixovz3v7odxdois6n9k77zx == \g\4\3\9\a\3\p\4\k\b\z\h\y\t\h\i\5\b\s\u\4\y\2\y\a\5\k\g\8\m\n\a\1\t\6\1\0\d\8\y\i\h\h\y\r\8\y\g\4\7\o\j\v\t\m\d\v\b\b\u\4\2\s\1\q\l\y\t\j\b\u\x\k\i\1\p\e\k\0\a\g\z\u\m\p\s\d\p\b\1\4\x\8\4\d\2\1\r\c\8\5\k\2\a\l\2\w\7\b\z\k\q\v\r\1\s\g\7\n\k\6\n\j\y\2\9\9\b\f\9\e\l\u\3\l\d\f\q\3\2\r\i\y\x\k\a\e\4\m\f\a\v\h\y\z\y\p\y\z\3\j\r\u\a\c\4\2\w\k\5\c\i\8\2\l\s\4\q\9\c\r\r\5\a\y\h\r\8\9\1\l\d\4\7\c\v\k\h\u\l\e\k\c\e\y\u\0\h\d\y\k\k\4\j\s\r\i\t\e\0\e\u\b\9\e\o\p\3\w\g\h\v\x\c\s\j\j\d\z\t\b\8\x\x\i\q\n\4\m\m\3\o\c\0\1\e\7\n\y\3\y\g\6\h\i\z\j\f\w\5\x\6\o\m\y\1\x\7\4\2\5\o\d\g\i\3\c\d\e\5\4\x\i\s\f\e\m\f\8\u\m\g\0\f\y\s\7\g\w\b\n\l\a\0\2\z\s\3\s\y\w\2\d\c\u\p\o\r\c\w\h\o\4\3\o\s\m\h\7\z\e\z\d\q\g\x\a\c\a\j\1\p\v\t\l\e\o\g\a\f\3\2\r\l\s\1\i\7\0\f\8\h\p\7\4\9\h\q\6\n\m\m\p\n\d\o\9\r\8\l\w\h\f\f\h\0\5\b\o\h\z\9\t\s\a\h\f\1\r\f\b\a\y\i\a\m\g\f\q\e\4\w\k\z\m\6\v\o\y\m\c\2\a\m\0\y\l\p\t\w\i\8\w\k\y\9\2\a\1\0\p\c\a\e\t\5\1\e\b\2\j\o\r\v\1\8\d\c\s\t\c\k\a\z\d\1\n\x\z\z\4\7\t\8\j\a\6\a\8\i\x\o\v\z\3\v\7\o\d\x\d\o\i\s\6\n\9\k\7\7\z\x ]] 00:07:54.215 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.215 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:54.215 [2024-11-19 10:02:08.064614] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
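The sync and dsync passes exercised back to back here differ in how much the kernel must flush on every write to dd.dump1: O_SYNC waits for the file data and all of its metadata to reach stable storage, while O_DSYNC waits only for the data plus whatever metadata is needed to read it back. At 512 bytes the reported averages (anywhere from 100 to 500 kBps across this matrix) are dominated by per-run overhead rather than by the flags. The plain-dd equivalents, for comparison:

    # Flush-on-write variants of the same one-block copy (GNU dd syntax).
    dd if=dd.dump0 of=dd.dump1 oflag=sync  bs=512 count=1 status=none   # O_SYNC: data + metadata per write
    dd if=dd.dump0 of=dd.dump1 oflag=dsync bs=512 count=1 status=none   # O_DSYNC: data, plus metadata needed to read it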
00:07:54.215 [2024-11-19 10:02:08.064761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:07:54.474 [2024-11-19 10:02:08.210487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.474 [2024-11-19 10:02:08.262549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.474 [2024-11-19 10:02:08.315770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.474  [2024-11-19T10:02:08.622Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.733 00:07:54.733 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ g439a3p4kbzhythi5bsu4y2ya5kg8mna1t610d8yihhyr8yg47ojvtmdvbbu42s1qlytjbuxki1pek0agzumpsdpb14x84d21rc85k2al2w7bzkqvr1sg7nk6njy299bf9elu3ldfq32riyxkae4mfavhyzypyz3jruac42wk5ci82ls4q9crr5ayhr891ld47cvkhulekceyu0hdykk4jsrite0eub9eop3wghvxcsjjdztb8xxiqn4mm3oc01e7ny3yg6hizjfw5x6omy1x7425odgi3cde54xisfemf8umg0fys7gwbnla02zs3syw2dcuporcwho43osmh7zezdqgxacaj1pvtleogaf32rls1i70f8hp749hq6nmmpndo9r8lwhffh05bohz9tsahf1rfbayiamgfqe4wkzm6voymc2am0ylptwi8wky92a10pcaet51eb2jorv18dcstckazd1nxzz47t8ja6a8ixovz3v7odxdois6n9k77zx == \g\4\3\9\a\3\p\4\k\b\z\h\y\t\h\i\5\b\s\u\4\y\2\y\a\5\k\g\8\m\n\a\1\t\6\1\0\d\8\y\i\h\h\y\r\8\y\g\4\7\o\j\v\t\m\d\v\b\b\u\4\2\s\1\q\l\y\t\j\b\u\x\k\i\1\p\e\k\0\a\g\z\u\m\p\s\d\p\b\1\4\x\8\4\d\2\1\r\c\8\5\k\2\a\l\2\w\7\b\z\k\q\v\r\1\s\g\7\n\k\6\n\j\y\2\9\9\b\f\9\e\l\u\3\l\d\f\q\3\2\r\i\y\x\k\a\e\4\m\f\a\v\h\y\z\y\p\y\z\3\j\r\u\a\c\4\2\w\k\5\c\i\8\2\l\s\4\q\9\c\r\r\5\a\y\h\r\8\9\1\l\d\4\7\c\v\k\h\u\l\e\k\c\e\y\u\0\h\d\y\k\k\4\j\s\r\i\t\e\0\e\u\b\9\e\o\p\3\w\g\h\v\x\c\s\j\j\d\z\t\b\8\x\x\i\q\n\4\m\m\3\o\c\0\1\e\7\n\y\3\y\g\6\h\i\z\j\f\w\5\x\6\o\m\y\1\x\7\4\2\5\o\d\g\i\3\c\d\e\5\4\x\i\s\f\e\m\f\8\u\m\g\0\f\y\s\7\g\w\b\n\l\a\0\2\z\s\3\s\y\w\2\d\c\u\p\o\r\c\w\h\o\4\3\o\s\m\h\7\z\e\z\d\q\g\x\a\c\a\j\1\p\v\t\l\e\o\g\a\f\3\2\r\l\s\1\i\7\0\f\8\h\p\7\4\9\h\q\6\n\m\m\p\n\d\o\9\r\8\l\w\h\f\f\h\0\5\b\o\h\z\9\t\s\a\h\f\1\r\f\b\a\y\i\a\m\g\f\q\e\4\w\k\z\m\6\v\o\y\m\c\2\a\m\0\y\l\p\t\w\i\8\w\k\y\9\2\a\1\0\p\c\a\e\t\5\1\e\b\2\j\o\r\v\1\8\d\c\s\t\c\k\a\z\d\1\n\x\z\z\4\7\t\8\j\a\6\a\8\i\x\o\v\z\3\v\7\o\d\x\d\o\i\s\6\n\9\k\7\7\z\x ]] 00:07:54.733 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:54.733 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:54.733 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:54.733 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:54.733 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.733 10:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:54.733 [2024-11-19 10:02:08.609403] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:54.733 [2024-11-19 10:02:08.609538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60242 ] 00:07:54.992 [2024-11-19 10:02:08.757682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.992 [2024-11-19 10:02:08.818173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.992 [2024-11-19 10:02:08.872285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.251  [2024-11-19T10:02:09.140Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.251 00:07:55.251 10:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l1kc28o0yw6ey29isb593ty39i96yrll0ne89zkjlk50ks5477gfsgwpmjamr7nudlqxuthxc07hzl2bvc4jyy0r7j0d773umrzwc53vqadcy9eesp1rn5h6hn8oenje9o7amj76ehkw2ea3mzu48cpjef41ubrvc0muzc0g0i6k1qggtaeavu3wi5izahj8ebgojcdbodx2ouszcj7fb1xkucmflwnszvgwljwcr6r19mi1duk1760ivz8bbszjg22f2v8kuoqbc9kw0ixiaq4tqeln1vzuqej7xfapayhun1mw9jnd0hhzqx8q74dpvynac5p1un7832dmmo7b4epjamq4gns9h9cl5vdylohmvvdxli2v6hi9eze1pttrkmnk8pvy49zipew8h13k6dk4de6ai4hrhs5vixg6ahsfkc81rvz6cnxtftfscgmdjgfgvxpv2af2u4xua4h08uibqm4v8lnuoljikk4dc9ed3bhdshbbc0e1p023zelj == \l\1\k\c\2\8\o\0\y\w\6\e\y\2\9\i\s\b\5\9\3\t\y\3\9\i\9\6\y\r\l\l\0\n\e\8\9\z\k\j\l\k\5\0\k\s\5\4\7\7\g\f\s\g\w\p\m\j\a\m\r\7\n\u\d\l\q\x\u\t\h\x\c\0\7\h\z\l\2\b\v\c\4\j\y\y\0\r\7\j\0\d\7\7\3\u\m\r\z\w\c\5\3\v\q\a\d\c\y\9\e\e\s\p\1\r\n\5\h\6\h\n\8\o\e\n\j\e\9\o\7\a\m\j\7\6\e\h\k\w\2\e\a\3\m\z\u\4\8\c\p\j\e\f\4\1\u\b\r\v\c\0\m\u\z\c\0\g\0\i\6\k\1\q\g\g\t\a\e\a\v\u\3\w\i\5\i\z\a\h\j\8\e\b\g\o\j\c\d\b\o\d\x\2\o\u\s\z\c\j\7\f\b\1\x\k\u\c\m\f\l\w\n\s\z\v\g\w\l\j\w\c\r\6\r\1\9\m\i\1\d\u\k\1\7\6\0\i\v\z\8\b\b\s\z\j\g\2\2\f\2\v\8\k\u\o\q\b\c\9\k\w\0\i\x\i\a\q\4\t\q\e\l\n\1\v\z\u\q\e\j\7\x\f\a\p\a\y\h\u\n\1\m\w\9\j\n\d\0\h\h\z\q\x\8\q\7\4\d\p\v\y\n\a\c\5\p\1\u\n\7\8\3\2\d\m\m\o\7\b\4\e\p\j\a\m\q\4\g\n\s\9\h\9\c\l\5\v\d\y\l\o\h\m\v\v\d\x\l\i\2\v\6\h\i\9\e\z\e\1\p\t\t\r\k\m\n\k\8\p\v\y\4\9\z\i\p\e\w\8\h\1\3\k\6\d\k\4\d\e\6\a\i\4\h\r\h\s\5\v\i\x\g\6\a\h\s\f\k\c\8\1\r\v\z\6\c\n\x\t\f\t\f\s\c\g\m\d\j\g\f\g\v\x\p\v\2\a\f\2\u\4\x\u\a\4\h\0\8\u\i\b\q\m\4\v\8\l\n\u\o\l\j\i\k\k\4\d\c\9\e\d\3\b\h\d\s\h\b\b\c\0\e\1\p\0\2\3\z\e\l\j ]] 00:07:55.251 10:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.251 10:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:55.509 [2024-11-19 10:02:09.180898] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:55.509 [2024-11-19 10:02:09.181026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60260 ] 00:07:55.509 [2024-11-19 10:02:09.322693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.509 [2024-11-19 10:02:09.382890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.767 [2024-11-19 10:02:09.438800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.767  [2024-11-19T10:02:09.656Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.767 00:07:55.767 10:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l1kc28o0yw6ey29isb593ty39i96yrll0ne89zkjlk50ks5477gfsgwpmjamr7nudlqxuthxc07hzl2bvc4jyy0r7j0d773umrzwc53vqadcy9eesp1rn5h6hn8oenje9o7amj76ehkw2ea3mzu48cpjef41ubrvc0muzc0g0i6k1qggtaeavu3wi5izahj8ebgojcdbodx2ouszcj7fb1xkucmflwnszvgwljwcr6r19mi1duk1760ivz8bbszjg22f2v8kuoqbc9kw0ixiaq4tqeln1vzuqej7xfapayhun1mw9jnd0hhzqx8q74dpvynac5p1un7832dmmo7b4epjamq4gns9h9cl5vdylohmvvdxli2v6hi9eze1pttrkmnk8pvy49zipew8h13k6dk4de6ai4hrhs5vixg6ahsfkc81rvz6cnxtftfscgmdjgfgvxpv2af2u4xua4h08uibqm4v8lnuoljikk4dc9ed3bhdshbbc0e1p023zelj == \l\1\k\c\2\8\o\0\y\w\6\e\y\2\9\i\s\b\5\9\3\t\y\3\9\i\9\6\y\r\l\l\0\n\e\8\9\z\k\j\l\k\5\0\k\s\5\4\7\7\g\f\s\g\w\p\m\j\a\m\r\7\n\u\d\l\q\x\u\t\h\x\c\0\7\h\z\l\2\b\v\c\4\j\y\y\0\r\7\j\0\d\7\7\3\u\m\r\z\w\c\5\3\v\q\a\d\c\y\9\e\e\s\p\1\r\n\5\h\6\h\n\8\o\e\n\j\e\9\o\7\a\m\j\7\6\e\h\k\w\2\e\a\3\m\z\u\4\8\c\p\j\e\f\4\1\u\b\r\v\c\0\m\u\z\c\0\g\0\i\6\k\1\q\g\g\t\a\e\a\v\u\3\w\i\5\i\z\a\h\j\8\e\b\g\o\j\c\d\b\o\d\x\2\o\u\s\z\c\j\7\f\b\1\x\k\u\c\m\f\l\w\n\s\z\v\g\w\l\j\w\c\r\6\r\1\9\m\i\1\d\u\k\1\7\6\0\i\v\z\8\b\b\s\z\j\g\2\2\f\2\v\8\k\u\o\q\b\c\9\k\w\0\i\x\i\a\q\4\t\q\e\l\n\1\v\z\u\q\e\j\7\x\f\a\p\a\y\h\u\n\1\m\w\9\j\n\d\0\h\h\z\q\x\8\q\7\4\d\p\v\y\n\a\c\5\p\1\u\n\7\8\3\2\d\m\m\o\7\b\4\e\p\j\a\m\q\4\g\n\s\9\h\9\c\l\5\v\d\y\l\o\h\m\v\v\d\x\l\i\2\v\6\h\i\9\e\z\e\1\p\t\t\r\k\m\n\k\8\p\v\y\4\9\z\i\p\e\w\8\h\1\3\k\6\d\k\4\d\e\6\a\i\4\h\r\h\s\5\v\i\x\g\6\a\h\s\f\k\c\8\1\r\v\z\6\c\n\x\t\f\t\f\s\c\g\m\d\j\g\f\g\v\x\p\v\2\a\f\2\u\4\x\u\a\4\h\0\8\u\i\b\q\m\4\v\8\l\n\u\o\l\j\i\k\k\4\d\c\9\e\d\3\b\h\d\s\h\b\b\c\0\e\1\p\0\2\3\z\e\l\j ]] 00:07:55.767 10:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.767 10:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:56.026 [2024-11-19 10:02:09.711476] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:56.026 [2024-11-19 10:02:09.711606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:07:56.026 [2024-11-19 10:02:09.857232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.026 [2024-11-19 10:02:09.914729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.284 [2024-11-19 10:02:09.966424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.284  [2024-11-19T10:02:10.431Z] Copying: 512/512 [B] (average 100 kBps) 00:07:56.542 00:07:56.543 10:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l1kc28o0yw6ey29isb593ty39i96yrll0ne89zkjlk50ks5477gfsgwpmjamr7nudlqxuthxc07hzl2bvc4jyy0r7j0d773umrzwc53vqadcy9eesp1rn5h6hn8oenje9o7amj76ehkw2ea3mzu48cpjef41ubrvc0muzc0g0i6k1qggtaeavu3wi5izahj8ebgojcdbodx2ouszcj7fb1xkucmflwnszvgwljwcr6r19mi1duk1760ivz8bbszjg22f2v8kuoqbc9kw0ixiaq4tqeln1vzuqej7xfapayhun1mw9jnd0hhzqx8q74dpvynac5p1un7832dmmo7b4epjamq4gns9h9cl5vdylohmvvdxli2v6hi9eze1pttrkmnk8pvy49zipew8h13k6dk4de6ai4hrhs5vixg6ahsfkc81rvz6cnxtftfscgmdjgfgvxpv2af2u4xua4h08uibqm4v8lnuoljikk4dc9ed3bhdshbbc0e1p023zelj == \l\1\k\c\2\8\o\0\y\w\6\e\y\2\9\i\s\b\5\9\3\t\y\3\9\i\9\6\y\r\l\l\0\n\e\8\9\z\k\j\l\k\5\0\k\s\5\4\7\7\g\f\s\g\w\p\m\j\a\m\r\7\n\u\d\l\q\x\u\t\h\x\c\0\7\h\z\l\2\b\v\c\4\j\y\y\0\r\7\j\0\d\7\7\3\u\m\r\z\w\c\5\3\v\q\a\d\c\y\9\e\e\s\p\1\r\n\5\h\6\h\n\8\o\e\n\j\e\9\o\7\a\m\j\7\6\e\h\k\w\2\e\a\3\m\z\u\4\8\c\p\j\e\f\4\1\u\b\r\v\c\0\m\u\z\c\0\g\0\i\6\k\1\q\g\g\t\a\e\a\v\u\3\w\i\5\i\z\a\h\j\8\e\b\g\o\j\c\d\b\o\d\x\2\o\u\s\z\c\j\7\f\b\1\x\k\u\c\m\f\l\w\n\s\z\v\g\w\l\j\w\c\r\6\r\1\9\m\i\1\d\u\k\1\7\6\0\i\v\z\8\b\b\s\z\j\g\2\2\f\2\v\8\k\u\o\q\b\c\9\k\w\0\i\x\i\a\q\4\t\q\e\l\n\1\v\z\u\q\e\j\7\x\f\a\p\a\y\h\u\n\1\m\w\9\j\n\d\0\h\h\z\q\x\8\q\7\4\d\p\v\y\n\a\c\5\p\1\u\n\7\8\3\2\d\m\m\o\7\b\4\e\p\j\a\m\q\4\g\n\s\9\h\9\c\l\5\v\d\y\l\o\h\m\v\v\d\x\l\i\2\v\6\h\i\9\e\z\e\1\p\t\t\r\k\m\n\k\8\p\v\y\4\9\z\i\p\e\w\8\h\1\3\k\6\d\k\4\d\e\6\a\i\4\h\r\h\s\5\v\i\x\g\6\a\h\s\f\k\c\8\1\r\v\z\6\c\n\x\t\f\t\f\s\c\g\m\d\j\g\f\g\v\x\p\v\2\a\f\2\u\4\x\u\a\4\h\0\8\u\i\b\q\m\4\v\8\l\n\u\o\l\j\i\k\k\4\d\c\9\e\d\3\b\h\d\s\h\b\b\c\0\e\1\p\0\2\3\z\e\l\j ]] 00:07:56.543 10:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.543 10:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:56.543 [2024-11-19 10:02:10.248938] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:56.543 [2024-11-19 10:02:10.249066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60279 ] 00:07:56.543 [2024-11-19 10:02:10.394412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.802 [2024-11-19 10:02:10.457843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.802 [2024-11-19 10:02:10.512772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.802  [2024-11-19T10:02:10.950Z] Copying: 512/512 [B] (average 166 kBps) 00:07:57.061 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l1kc28o0yw6ey29isb593ty39i96yrll0ne89zkjlk50ks5477gfsgwpmjamr7nudlqxuthxc07hzl2bvc4jyy0r7j0d773umrzwc53vqadcy9eesp1rn5h6hn8oenje9o7amj76ehkw2ea3mzu48cpjef41ubrvc0muzc0g0i6k1qggtaeavu3wi5izahj8ebgojcdbodx2ouszcj7fb1xkucmflwnszvgwljwcr6r19mi1duk1760ivz8bbszjg22f2v8kuoqbc9kw0ixiaq4tqeln1vzuqej7xfapayhun1mw9jnd0hhzqx8q74dpvynac5p1un7832dmmo7b4epjamq4gns9h9cl5vdylohmvvdxli2v6hi9eze1pttrkmnk8pvy49zipew8h13k6dk4de6ai4hrhs5vixg6ahsfkc81rvz6cnxtftfscgmdjgfgvxpv2af2u4xua4h08uibqm4v8lnuoljikk4dc9ed3bhdshbbc0e1p023zelj == \l\1\k\c\2\8\o\0\y\w\6\e\y\2\9\i\s\b\5\9\3\t\y\3\9\i\9\6\y\r\l\l\0\n\e\8\9\z\k\j\l\k\5\0\k\s\5\4\7\7\g\f\s\g\w\p\m\j\a\m\r\7\n\u\d\l\q\x\u\t\h\x\c\0\7\h\z\l\2\b\v\c\4\j\y\y\0\r\7\j\0\d\7\7\3\u\m\r\z\w\c\5\3\v\q\a\d\c\y\9\e\e\s\p\1\r\n\5\h\6\h\n\8\o\e\n\j\e\9\o\7\a\m\j\7\6\e\h\k\w\2\e\a\3\m\z\u\4\8\c\p\j\e\f\4\1\u\b\r\v\c\0\m\u\z\c\0\g\0\i\6\k\1\q\g\g\t\a\e\a\v\u\3\w\i\5\i\z\a\h\j\8\e\b\g\o\j\c\d\b\o\d\x\2\o\u\s\z\c\j\7\f\b\1\x\k\u\c\m\f\l\w\n\s\z\v\g\w\l\j\w\c\r\6\r\1\9\m\i\1\d\u\k\1\7\6\0\i\v\z\8\b\b\s\z\j\g\2\2\f\2\v\8\k\u\o\q\b\c\9\k\w\0\i\x\i\a\q\4\t\q\e\l\n\1\v\z\u\q\e\j\7\x\f\a\p\a\y\h\u\n\1\m\w\9\j\n\d\0\h\h\z\q\x\8\q\7\4\d\p\v\y\n\a\c\5\p\1\u\n\7\8\3\2\d\m\m\o\7\b\4\e\p\j\a\m\q\4\g\n\s\9\h\9\c\l\5\v\d\y\l\o\h\m\v\v\d\x\l\i\2\v\6\h\i\9\e\z\e\1\p\t\t\r\k\m\n\k\8\p\v\y\4\9\z\i\p\e\w\8\h\1\3\k\6\d\k\4\d\e\6\a\i\4\h\r\h\s\5\v\i\x\g\6\a\h\s\f\k\c\8\1\r\v\z\6\c\n\x\t\f\t\f\s\c\g\m\d\j\g\f\g\v\x\p\v\2\a\f\2\u\4\x\u\a\4\h\0\8\u\i\b\q\m\4\v\8\l\n\u\o\l\j\i\k\k\4\d\c\9\e\d\3\b\h\d\s\h\b\b\c\0\e\1\p\0\2\3\z\e\l\j ]] 00:07:57.061 00:07:57.061 real 0m4.420s 00:07:57.061 user 0m2.414s 00:07:57.061 sys 0m2.248s 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.061 ************************************ 00:07:57.061 END TEST dd_flags_misc 00:07:57.061 ************************************ 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:57.061 * Second test run, disabling liburing, forcing AIO 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.061 ************************************ 00:07:57.061 START TEST dd_flag_append_forced_aio 00:07:57.061 ************************************ 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:57.061 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=wufpc2lf7o56f0iekdx9owd9fx5f3rs0 00:07:57.062 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:57.062 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:57.062 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:57.062 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=3oug27ilef0u0398a12o7qhlvar6frkq 00:07:57.062 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s wufpc2lf7o56f0iekdx9owd9fx5f3rs0 00:07:57.062 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 3oug27ilef0u0398a12o7qhlvar6frkq 00:07:57.062 10:02:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:57.062 [2024-11-19 10:02:10.868280] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
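This is the first test of the second pass, where --aio has been added to the spdk_dd invocation so the copy goes through the POSIX AIO backend rather than liburing. The append check itself writes one 32-byte string into each dump file, copies dd.dump0 onto dd.dump1 with --oflag=append, and then expects dd.dump1 to hold its original bytes followed by dd.dump0's, which is what the [[ ... ]] comparison further down asserts. A self-contained sketch of the same idea with GNU dd, whose documentation recommends conv=notrunc alongside oflag=append; the strings are made up, not the ones in this trace:

    # oflag=append: the output is opened O_APPEND, so the copy lands after existing bytes.
    a='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'    # 32-byte stand-in for the first gen_bytes 32
    b='bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb'    # 32-byte stand-in for the second
    printf '%s' "$a" > dd.dump0
    printf '%s' "$b" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc bs=32 count=1 status=none
    [[ "$(< dd.dump1)" == "$b$a" ]] && echo "append honored: dump1 = old dump1 + dump0"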
00:07:57.062 [2024-11-19 10:02:10.868421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60312 ] 00:07:57.320 [2024-11-19 10:02:11.020744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.320 [2024-11-19 10:02:11.090044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.320 [2024-11-19 10:02:11.149184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.320  [2024-11-19T10:02:11.467Z] Copying: 32/32 [B] (average 31 kBps) 00:07:57.578 00:07:57.578 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 3oug27ilef0u0398a12o7qhlvar6frkqwufpc2lf7o56f0iekdx9owd9fx5f3rs0 == \3\o\u\g\2\7\i\l\e\f\0\u\0\3\9\8\a\1\2\o\7\q\h\l\v\a\r\6\f\r\k\q\w\u\f\p\c\2\l\f\7\o\5\6\f\0\i\e\k\d\x\9\o\w\d\9\f\x\5\f\3\r\s\0 ]] 00:07:57.578 00:07:57.578 real 0m0.595s 00:07:57.578 user 0m0.332s 00:07:57.578 sys 0m0.140s 00:07:57.578 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.578 ************************************ 00:07:57.578 END TEST dd_flag_append_forced_aio 00:07:57.578 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:57.578 ************************************ 00:07:57.578 10:02:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:57.578 10:02:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.578 10:02:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.578 10:02:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:57.578 ************************************ 00:07:57.578 START TEST dd_flag_directory_forced_aio 00:07:57.578 ************************************ 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.579 10:02:11 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.579 10:02:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.837 [2024-11-19 10:02:11.519880] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:57.837 [2024-11-19 10:02:11.520019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60334 ] 00:07:57.837 [2024-11-19 10:02:11.673446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.096 [2024-11-19 10:02:11.752506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.096 [2024-11-19 10:02:11.816128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.096 [2024-11-19 10:02:11.859559] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.096 [2024-11-19 10:02:11.859606] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.096 [2024-11-19 10:02:11.859625] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.355 [2024-11-19 10:02:11.993908] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.355 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.355 [2024-11-19 10:02:12.131947] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:58.355 [2024-11-19 10:02:12.132062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60349 ] 00:07:58.618 [2024-11-19 10:02:12.282490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.618 [2024-11-19 10:02:12.342120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.618 [2024-11-19 10:02:12.396160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.618 [2024-11-19 10:02:12.433797] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.618 [2024-11-19 10:02:12.433865] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.618 [2024-11-19 10:02:12.433884] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.889 [2024-11-19 10:02:12.553761] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:58.889 10:02:12 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.889 00:07:58.889 real 0m1.166s 00:07:58.889 user 0m0.647s 00:07:58.889 sys 0m0.307s 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.889 ************************************ 00:07:58.889 END TEST dd_flag_directory_forced_aio 00:07:58.889 ************************************ 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:58.889 ************************************ 00:07:58.889 START TEST dd_flag_nofollow_forced_aio 00:07:58.889 ************************************ 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.889 10:02:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.889 [2024-11-19 10:02:12.753491] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:58.889 [2024-11-19 10:02:12.753596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60372 ] 00:07:59.148 [2024-11-19 10:02:12.902417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.148 [2024-11-19 10:02:12.965761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.148 [2024-11-19 10:02:13.019879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.407 [2024-11-19 10:02:13.057606] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.407 [2024-11-19 10:02:13.057671] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.407 [2024-11-19 10:02:13.057689] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.407 [2024-11-19 10:02:13.180588] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.407 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:59.407 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.407 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:59.407 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.407 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.407 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.407 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.408 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.665 [2024-11-19 10:02:13.321488] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:59.665 [2024-11-19 10:02:13.321596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60387 ] 00:07:59.665 [2024-11-19 10:02:13.464219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.666 [2024-11-19 10:02:13.527319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.924 [2024-11-19 10:02:13.582882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.924 [2024-11-19 10:02:13.616835] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.924 [2024-11-19 10:02:13.616898] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.924 [2024-11-19 10:02:13.616916] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.924 [2024-11-19 10:02:13.732093] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:59.924 10:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.183 [2024-11-19 10:02:13.842507] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:00.183 [2024-11-19 10:02:13.842610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60395 ] 00:08:00.183 [2024-11-19 10:02:13.983618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.183 [2024-11-19 10:02:14.046111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.441 [2024-11-19 10:02:14.101382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.441  [2024-11-19T10:02:14.590Z] Copying: 512/512 [B] (average 500 kBps) 00:08:00.701 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 0r0xdste1a0dymz821q94153pkx9nthzwrsuhscbevx5rq5pe1vjkl4b68caw6qi4gqs0aasam1zvb9ynge8qkztccecx59x3qkvepy1miegbdxnmesbaipna3b6run56v0fu2ccv4ptj1e3a2hxr2drq0jkmrpxth1ybzxbjfcgbjyx4623zffvp74g40gzhclye9gdytu7u5mswajobd1ycltlhkye9xsfwiil4ou6ilcpj8baxufdfflgm4eqlt2rewdb47dgm8xq917i4f30nkz7zkyewmdkor690fxjlqj7hhxp8csk3srx8saidyecvda0us4axqo7xbjhf8mcnfxvj5ymsa37raxqdls7l540dkevdr3nj3i0xp8rhac5j6hwxqa3r4eevl54ovqgwja8h55rij9vrirjfj3svavczpo7m4eala7e5ki6e72kr0gfc8btmr01q7egtltw0ess93wahbb8wwxotu11vh1q936dkhpb0rnj8aw1 == \0\r\0\x\d\s\t\e\1\a\0\d\y\m\z\8\2\1\q\9\4\1\5\3\p\k\x\9\n\t\h\z\w\r\s\u\h\s\c\b\e\v\x\5\r\q\5\p\e\1\v\j\k\l\4\b\6\8\c\a\w\6\q\i\4\g\q\s\0\a\a\s\a\m\1\z\v\b\9\y\n\g\e\8\q\k\z\t\c\c\e\c\x\5\9\x\3\q\k\v\e\p\y\1\m\i\e\g\b\d\x\n\m\e\s\b\a\i\p\n\a\3\b\6\r\u\n\5\6\v\0\f\u\2\c\c\v\4\p\t\j\1\e\3\a\2\h\x\r\2\d\r\q\0\j\k\m\r\p\x\t\h\1\y\b\z\x\b\j\f\c\g\b\j\y\x\4\6\2\3\z\f\f\v\p\7\4\g\4\0\g\z\h\c\l\y\e\9\g\d\y\t\u\7\u\5\m\s\w\a\j\o\b\d\1\y\c\l\t\l\h\k\y\e\9\x\s\f\w\i\i\l\4\o\u\6\i\l\c\p\j\8\b\a\x\u\f\d\f\f\l\g\m\4\e\q\l\t\2\r\e\w\d\b\4\7\d\g\m\8\x\q\9\1\7\i\4\f\3\0\n\k\z\7\z\k\y\e\w\m\d\k\o\r\6\9\0\f\x\j\l\q\j\7\h\h\x\p\8\c\s\k\3\s\r\x\8\s\a\i\d\y\e\c\v\d\a\0\u\s\4\a\x\q\o\7\x\b\j\h\f\8\m\c\n\f\x\v\j\5\y\m\s\a\3\7\r\a\x\q\d\l\s\7\l\5\4\0\d\k\e\v\d\r\3\n\j\3\i\0\x\p\8\r\h\a\c\5\j\6\h\w\x\q\a\3\r\4\e\e\v\l\5\4\o\v\q\g\w\j\a\8\h\5\5\r\i\j\9\v\r\i\r\j\f\j\3\s\v\a\v\c\z\p\o\7\m\4\e\a\l\a\7\e\5\k\i\6\e\7\2\k\r\0\g\f\c\8\b\t\m\r\0\1\q\7\e\g\t\l\t\w\0\e\s\s\9\3\w\a\h\b\b\8\w\w\x\o\t\u\1\1\v\h\1\q\9\3\6\d\k\h\p\b\0\r\n\j\8\a\w\1 ]] 00:08:00.701 00:08:00.701 real 0m1.673s 00:08:00.701 user 0m0.911s 00:08:00.701 sys 0m0.429s 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.701 ************************************ 00:08:00.701 END TEST dd_flag_nofollow_forced_aio 00:08:00.701 ************************************ 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:00.701 ************************************ 00:08:00.701 START TEST dd_flag_noatime_forced_aio 00:08:00.701 ************************************ 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732010534 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732010534 00:08:00.701 10:02:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:01.638 10:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.638 [2024-11-19 10:02:15.503348] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
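From here the noatime check runs again, unchanged except for the backend: the atimes recorded above (both 1732010534) must survive the --aio --iflag=noatime copy and then advance once the plain --aio copy runs. The backend switch is just the array append made at the start of this second pass; a sketch of that mechanism, assuming DD_APP begins as the path to the spdk_dd binary (which is how the traced commands read):

    # Illustrative only: how the second pass forces the AIO code path.
    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    DD_APP+=("--aio")     # per posix.sh@113: disable liburing, force POSIX AIO
    "${DD_APP[@]}" --if=dd.dump0 --iflag=noatime --of=dd.dump1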
00:08:01.638 [2024-11-19 10:02:15.503453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60435 ] 00:08:01.896 [2024-11-19 10:02:15.657097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.896 [2024-11-19 10:02:15.721859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.896 [2024-11-19 10:02:15.779822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.155  [2024-11-19T10:02:16.044Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.155 00:08:02.155 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.155 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732010534 )) 00:08:02.155 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.414 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732010534 )) 00:08:02.414 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.414 [2024-11-19 10:02:16.103375] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:02.414 [2024-11-19 10:02:16.103477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60451 ] 00:08:02.414 [2024-11-19 10:02:16.251603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.414 [2024-11-19 10:02:16.300648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.672 [2024-11-19 10:02:16.358076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.672  [2024-11-19T10:02:16.820Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.931 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732010536 )) 00:08:02.931 00:08:02.931 real 0m2.202s 00:08:02.931 user 0m0.640s 00:08:02.931 sys 0m0.312s 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.931 ************************************ 00:08:02.931 END TEST dd_flag_noatime_forced_aio 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.931 ************************************ 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.931 ************************************ 00:08:02.931 START TEST dd_flags_misc_forced_aio 00:08:02.931 ************************************ 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.931 10:02:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:02.931 [2024-11-19 10:02:16.728097] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:02.931 [2024-11-19 10:02:16.728217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60475 ] 00:08:03.191 [2024-11-19 10:02:16.869003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.191 [2024-11-19 10:02:16.931098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.191 [2024-11-19 10:02:16.990200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.191  [2024-11-19T10:02:17.339Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.450 00:08:03.451 10:02:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6ytbhgpprktgxx0jpkvvz6dgo6tc88fgxox0khxpql5lqy769mwf1wsqy15nqiluqeqprza9igzkz8d46ruwxhh8gxwm27j5plhhl38mmv8m2j6kbk0thkknnp5uu686nrw5jobde3kdk5d2e3t2ubwuceo4b7b7e8kdgi3eavupr1cswavm7gq5nxw5bgsxcn4eq4to7ymc0o953kk9xmunzfjlsd9kn7dodw3g00t6p4ktyfhbffbyr2lk7s7shma6wfq56blguqom193408dme24moqkllsseranjo0bemeocioqq5df087qolcqq0cehvqzc8yrypzmtididc1npz557di84z7tnjdmurlyzm950v7dvy44yke2saoix2aizz8wbkog030jpnpe1qvn6moeen3ru7nsobvnwzt2qndz4ryki9f0oarkmsvidxx8z0928es7s5iwuz673agfog0igpbw71z6oc34ty0gc4khhctn9kxl2p6jbyuvi == 
\6\y\t\b\h\g\p\p\r\k\t\g\x\x\0\j\p\k\v\v\z\6\d\g\o\6\t\c\8\8\f\g\x\o\x\0\k\h\x\p\q\l\5\l\q\y\7\6\9\m\w\f\1\w\s\q\y\1\5\n\q\i\l\u\q\e\q\p\r\z\a\9\i\g\z\k\z\8\d\4\6\r\u\w\x\h\h\8\g\x\w\m\2\7\j\5\p\l\h\h\l\3\8\m\m\v\8\m\2\j\6\k\b\k\0\t\h\k\k\n\n\p\5\u\u\6\8\6\n\r\w\5\j\o\b\d\e\3\k\d\k\5\d\2\e\3\t\2\u\b\w\u\c\e\o\4\b\7\b\7\e\8\k\d\g\i\3\e\a\v\u\p\r\1\c\s\w\a\v\m\7\g\q\5\n\x\w\5\b\g\s\x\c\n\4\e\q\4\t\o\7\y\m\c\0\o\9\5\3\k\k\9\x\m\u\n\z\f\j\l\s\d\9\k\n\7\d\o\d\w\3\g\0\0\t\6\p\4\k\t\y\f\h\b\f\f\b\y\r\2\l\k\7\s\7\s\h\m\a\6\w\f\q\5\6\b\l\g\u\q\o\m\1\9\3\4\0\8\d\m\e\2\4\m\o\q\k\l\l\s\s\e\r\a\n\j\o\0\b\e\m\e\o\c\i\o\q\q\5\d\f\0\8\7\q\o\l\c\q\q\0\c\e\h\v\q\z\c\8\y\r\y\p\z\m\t\i\d\i\d\c\1\n\p\z\5\5\7\d\i\8\4\z\7\t\n\j\d\m\u\r\l\y\z\m\9\5\0\v\7\d\v\y\4\4\y\k\e\2\s\a\o\i\x\2\a\i\z\z\8\w\b\k\o\g\0\3\0\j\p\n\p\e\1\q\v\n\6\m\o\e\e\n\3\r\u\7\n\s\o\b\v\n\w\z\t\2\q\n\d\z\4\r\y\k\i\9\f\0\o\a\r\k\m\s\v\i\d\x\x\8\z\0\9\2\8\e\s\7\s\5\i\w\u\z\6\7\3\a\g\f\o\g\0\i\g\p\b\w\7\1\z\6\o\c\3\4\t\y\0\g\c\4\k\h\h\c\t\n\9\k\x\l\2\p\6\j\b\y\u\v\i ]] 00:08:03.451 10:02:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.451 10:02:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:03.451 [2024-11-19 10:02:17.307113] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:03.451 [2024-11-19 10:02:17.307259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60488 ] 00:08:03.709 [2024-11-19 10:02:17.451457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.709 [2024-11-19 10:02:17.500674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.709 [2024-11-19 10:02:17.558676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.968  [2024-11-19T10:02:17.857Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.968 00:08:03.968 10:02:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6ytbhgpprktgxx0jpkvvz6dgo6tc88fgxox0khxpql5lqy769mwf1wsqy15nqiluqeqprza9igzkz8d46ruwxhh8gxwm27j5plhhl38mmv8m2j6kbk0thkknnp5uu686nrw5jobde3kdk5d2e3t2ubwuceo4b7b7e8kdgi3eavupr1cswavm7gq5nxw5bgsxcn4eq4to7ymc0o953kk9xmunzfjlsd9kn7dodw3g00t6p4ktyfhbffbyr2lk7s7shma6wfq56blguqom193408dme24moqkllsseranjo0bemeocioqq5df087qolcqq0cehvqzc8yrypzmtididc1npz557di84z7tnjdmurlyzm950v7dvy44yke2saoix2aizz8wbkog030jpnpe1qvn6moeen3ru7nsobvnwzt2qndz4ryki9f0oarkmsvidxx8z0928es7s5iwuz673agfog0igpbw71z6oc34ty0gc4khhctn9kxl2p6jbyuvi == 
\6\y\t\b\h\g\p\p\r\k\t\g\x\x\0\j\p\k\v\v\z\6\d\g\o\6\t\c\8\8\f\g\x\o\x\0\k\h\x\p\q\l\5\l\q\y\7\6\9\m\w\f\1\w\s\q\y\1\5\n\q\i\l\u\q\e\q\p\r\z\a\9\i\g\z\k\z\8\d\4\6\r\u\w\x\h\h\8\g\x\w\m\2\7\j\5\p\l\h\h\l\3\8\m\m\v\8\m\2\j\6\k\b\k\0\t\h\k\k\n\n\p\5\u\u\6\8\6\n\r\w\5\j\o\b\d\e\3\k\d\k\5\d\2\e\3\t\2\u\b\w\u\c\e\o\4\b\7\b\7\e\8\k\d\g\i\3\e\a\v\u\p\r\1\c\s\w\a\v\m\7\g\q\5\n\x\w\5\b\g\s\x\c\n\4\e\q\4\t\o\7\y\m\c\0\o\9\5\3\k\k\9\x\m\u\n\z\f\j\l\s\d\9\k\n\7\d\o\d\w\3\g\0\0\t\6\p\4\k\t\y\f\h\b\f\f\b\y\r\2\l\k\7\s\7\s\h\m\a\6\w\f\q\5\6\b\l\g\u\q\o\m\1\9\3\4\0\8\d\m\e\2\4\m\o\q\k\l\l\s\s\e\r\a\n\j\o\0\b\e\m\e\o\c\i\o\q\q\5\d\f\0\8\7\q\o\l\c\q\q\0\c\e\h\v\q\z\c\8\y\r\y\p\z\m\t\i\d\i\d\c\1\n\p\z\5\5\7\d\i\8\4\z\7\t\n\j\d\m\u\r\l\y\z\m\9\5\0\v\7\d\v\y\4\4\y\k\e\2\s\a\o\i\x\2\a\i\z\z\8\w\b\k\o\g\0\3\0\j\p\n\p\e\1\q\v\n\6\m\o\e\e\n\3\r\u\7\n\s\o\b\v\n\w\z\t\2\q\n\d\z\4\r\y\k\i\9\f\0\o\a\r\k\m\s\v\i\d\x\x\8\z\0\9\2\8\e\s\7\s\5\i\w\u\z\6\7\3\a\g\f\o\g\0\i\g\p\b\w\7\1\z\6\o\c\3\4\t\y\0\g\c\4\k\h\h\c\t\n\9\k\x\l\2\p\6\j\b\y\u\v\i ]] 00:08:03.968 10:02:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.968 10:02:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:04.227 [2024-11-19 10:02:17.867082] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:04.227 [2024-11-19 10:02:17.867175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60490 ] 00:08:04.227 [2024-11-19 10:02:18.014339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.227 [2024-11-19 10:02:18.072726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.488 [2024-11-19 10:02:18.132083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.488  [2024-11-19T10:02:18.636Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.747 00:08:04.747 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6ytbhgpprktgxx0jpkvvz6dgo6tc88fgxox0khxpql5lqy769mwf1wsqy15nqiluqeqprza9igzkz8d46ruwxhh8gxwm27j5plhhl38mmv8m2j6kbk0thkknnp5uu686nrw5jobde3kdk5d2e3t2ubwuceo4b7b7e8kdgi3eavupr1cswavm7gq5nxw5bgsxcn4eq4to7ymc0o953kk9xmunzfjlsd9kn7dodw3g00t6p4ktyfhbffbyr2lk7s7shma6wfq56blguqom193408dme24moqkllsseranjo0bemeocioqq5df087qolcqq0cehvqzc8yrypzmtididc1npz557di84z7tnjdmurlyzm950v7dvy44yke2saoix2aizz8wbkog030jpnpe1qvn6moeen3ru7nsobvnwzt2qndz4ryki9f0oarkmsvidxx8z0928es7s5iwuz673agfog0igpbw71z6oc34ty0gc4khhctn9kxl2p6jbyuvi == 
\6\y\t\b\h\g\p\p\r\k\t\g\x\x\0\j\p\k\v\v\z\6\d\g\o\6\t\c\8\8\f\g\x\o\x\0\k\h\x\p\q\l\5\l\q\y\7\6\9\m\w\f\1\w\s\q\y\1\5\n\q\i\l\u\q\e\q\p\r\z\a\9\i\g\z\k\z\8\d\4\6\r\u\w\x\h\h\8\g\x\w\m\2\7\j\5\p\l\h\h\l\3\8\m\m\v\8\m\2\j\6\k\b\k\0\t\h\k\k\n\n\p\5\u\u\6\8\6\n\r\w\5\j\o\b\d\e\3\k\d\k\5\d\2\e\3\t\2\u\b\w\u\c\e\o\4\b\7\b\7\e\8\k\d\g\i\3\e\a\v\u\p\r\1\c\s\w\a\v\m\7\g\q\5\n\x\w\5\b\g\s\x\c\n\4\e\q\4\t\o\7\y\m\c\0\o\9\5\3\k\k\9\x\m\u\n\z\f\j\l\s\d\9\k\n\7\d\o\d\w\3\g\0\0\t\6\p\4\k\t\y\f\h\b\f\f\b\y\r\2\l\k\7\s\7\s\h\m\a\6\w\f\q\5\6\b\l\g\u\q\o\m\1\9\3\4\0\8\d\m\e\2\4\m\o\q\k\l\l\s\s\e\r\a\n\j\o\0\b\e\m\e\o\c\i\o\q\q\5\d\f\0\8\7\q\o\l\c\q\q\0\c\e\h\v\q\z\c\8\y\r\y\p\z\m\t\i\d\i\d\c\1\n\p\z\5\5\7\d\i\8\4\z\7\t\n\j\d\m\u\r\l\y\z\m\9\5\0\v\7\d\v\y\4\4\y\k\e\2\s\a\o\i\x\2\a\i\z\z\8\w\b\k\o\g\0\3\0\j\p\n\p\e\1\q\v\n\6\m\o\e\e\n\3\r\u\7\n\s\o\b\v\n\w\z\t\2\q\n\d\z\4\r\y\k\i\9\f\0\o\a\r\k\m\s\v\i\d\x\x\8\z\0\9\2\8\e\s\7\s\5\i\w\u\z\6\7\3\a\g\f\o\g\0\i\g\p\b\w\7\1\z\6\o\c\3\4\t\y\0\g\c\4\k\h\h\c\t\n\9\k\x\l\2\p\6\j\b\y\u\v\i ]] 00:08:04.747 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.747 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:04.747 [2024-11-19 10:02:18.458555] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:04.747 [2024-11-19 10:02:18.458704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60503 ] 00:08:04.747 [2024-11-19 10:02:18.607831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.006 [2024-11-19 10:02:18.674255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.006 [2024-11-19 10:02:18.732427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.006  [2024-11-19T10:02:19.154Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.265 00:08:05.265 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6ytbhgpprktgxx0jpkvvz6dgo6tc88fgxox0khxpql5lqy769mwf1wsqy15nqiluqeqprza9igzkz8d46ruwxhh8gxwm27j5plhhl38mmv8m2j6kbk0thkknnp5uu686nrw5jobde3kdk5d2e3t2ubwuceo4b7b7e8kdgi3eavupr1cswavm7gq5nxw5bgsxcn4eq4to7ymc0o953kk9xmunzfjlsd9kn7dodw3g00t6p4ktyfhbffbyr2lk7s7shma6wfq56blguqom193408dme24moqkllsseranjo0bemeocioqq5df087qolcqq0cehvqzc8yrypzmtididc1npz557di84z7tnjdmurlyzm950v7dvy44yke2saoix2aizz8wbkog030jpnpe1qvn6moeen3ru7nsobvnwzt2qndz4ryki9f0oarkmsvidxx8z0928es7s5iwuz673agfog0igpbw71z6oc34ty0gc4khhctn9kxl2p6jbyuvi == 
\6\y\t\b\h\g\p\p\r\k\t\g\x\x\0\j\p\k\v\v\z\6\d\g\o\6\t\c\8\8\f\g\x\o\x\0\k\h\x\p\q\l\5\l\q\y\7\6\9\m\w\f\1\w\s\q\y\1\5\n\q\i\l\u\q\e\q\p\r\z\a\9\i\g\z\k\z\8\d\4\6\r\u\w\x\h\h\8\g\x\w\m\2\7\j\5\p\l\h\h\l\3\8\m\m\v\8\m\2\j\6\k\b\k\0\t\h\k\k\n\n\p\5\u\u\6\8\6\n\r\w\5\j\o\b\d\e\3\k\d\k\5\d\2\e\3\t\2\u\b\w\u\c\e\o\4\b\7\b\7\e\8\k\d\g\i\3\e\a\v\u\p\r\1\c\s\w\a\v\m\7\g\q\5\n\x\w\5\b\g\s\x\c\n\4\e\q\4\t\o\7\y\m\c\0\o\9\5\3\k\k\9\x\m\u\n\z\f\j\l\s\d\9\k\n\7\d\o\d\w\3\g\0\0\t\6\p\4\k\t\y\f\h\b\f\f\b\y\r\2\l\k\7\s\7\s\h\m\a\6\w\f\q\5\6\b\l\g\u\q\o\m\1\9\3\4\0\8\d\m\e\2\4\m\o\q\k\l\l\s\s\e\r\a\n\j\o\0\b\e\m\e\o\c\i\o\q\q\5\d\f\0\8\7\q\o\l\c\q\q\0\c\e\h\v\q\z\c\8\y\r\y\p\z\m\t\i\d\i\d\c\1\n\p\z\5\5\7\d\i\8\4\z\7\t\n\j\d\m\u\r\l\y\z\m\9\5\0\v\7\d\v\y\4\4\y\k\e\2\s\a\o\i\x\2\a\i\z\z\8\w\b\k\o\g\0\3\0\j\p\n\p\e\1\q\v\n\6\m\o\e\e\n\3\r\u\7\n\s\o\b\v\n\w\z\t\2\q\n\d\z\4\r\y\k\i\9\f\0\o\a\r\k\m\s\v\i\d\x\x\8\z\0\9\2\8\e\s\7\s\5\i\w\u\z\6\7\3\a\g\f\o\g\0\i\g\p\b\w\7\1\z\6\o\c\3\4\t\y\0\g\c\4\k\h\h\c\t\n\9\k\x\l\2\p\6\j\b\y\u\v\i ]] 00:08:05.265 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:05.265 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:05.265 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:05.265 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:05.265 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.265 10:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:05.265 [2024-11-19 10:02:19.049346] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:05.265 [2024-11-19 10:02:19.049444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60511 ] 00:08:05.525 [2024-11-19 10:02:19.195709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.525 [2024-11-19 10:02:19.245892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.525 [2024-11-19 10:02:19.299253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.525  [2024-11-19T10:02:19.674Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.785 00:08:05.785 10:02:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 23mbmfv5vl9m999gyy4cgt5a4mzs3g2rkzxbanf2zoviwcu5ptmr8f3qzyi44dw7wp1i96beiv2m1vhx81z3e6iguf9lanenyeqbog46hwtj1qdcy55rpm8kw53vamrrrtxk9oi89ilw3yef0q0391v8o3k7e2hsuxm1q3hhggditfsutb9c7osgy4okz0xs8agc78dnizt9mehufyln9dzqws72wicv7rhx0ih4ob48d8zxxjlrju3rpq77kxclto0r3c57cf92jy5bhm6dm2hqgt4lyncyyf2v7dshfwcvyfpgf4m9qq9rkmngcq4avnk28tpovtvgkz36xp5idgqdokteoy7aur9mpczhtuel8srp2rbvs54yyb7q2j7w4u02qbfwy6txycenjp0yecvgu778xa6jbv8djf1md21ymh7cgcvnry4hmxu4dxqeg58nk0sc8suxjkvgti7lanpm5ktqhogfdfmdpf490nwhf8tch4stkto3wcmisf6d == \2\3\m\b\m\f\v\5\v\l\9\m\9\9\9\g\y\y\4\c\g\t\5\a\4\m\z\s\3\g\2\r\k\z\x\b\a\n\f\2\z\o\v\i\w\c\u\5\p\t\m\r\8\f\3\q\z\y\i\4\4\d\w\7\w\p\1\i\9\6\b\e\i\v\2\m\1\v\h\x\8\1\z\3\e\6\i\g\u\f\9\l\a\n\e\n\y\e\q\b\o\g\4\6\h\w\t\j\1\q\d\c\y\5\5\r\p\m\8\k\w\5\3\v\a\m\r\r\r\t\x\k\9\o\i\8\9\i\l\w\3\y\e\f\0\q\0\3\9\1\v\8\o\3\k\7\e\2\h\s\u\x\m\1\q\3\h\h\g\g\d\i\t\f\s\u\t\b\9\c\7\o\s\g\y\4\o\k\z\0\x\s\8\a\g\c\7\8\d\n\i\z\t\9\m\e\h\u\f\y\l\n\9\d\z\q\w\s\7\2\w\i\c\v\7\r\h\x\0\i\h\4\o\b\4\8\d\8\z\x\x\j\l\r\j\u\3\r\p\q\7\7\k\x\c\l\t\o\0\r\3\c\5\7\c\f\9\2\j\y\5\b\h\m\6\d\m\2\h\q\g\t\4\l\y\n\c\y\y\f\2\v\7\d\s\h\f\w\c\v\y\f\p\g\f\4\m\9\q\q\9\r\k\m\n\g\c\q\4\a\v\n\k\2\8\t\p\o\v\t\v\g\k\z\3\6\x\p\5\i\d\g\q\d\o\k\t\e\o\y\7\a\u\r\9\m\p\c\z\h\t\u\e\l\8\s\r\p\2\r\b\v\s\5\4\y\y\b\7\q\2\j\7\w\4\u\0\2\q\b\f\w\y\6\t\x\y\c\e\n\j\p\0\y\e\c\v\g\u\7\7\8\x\a\6\j\b\v\8\d\j\f\1\m\d\2\1\y\m\h\7\c\g\c\v\n\r\y\4\h\m\x\u\4\d\x\q\e\g\5\8\n\k\0\s\c\8\s\u\x\j\k\v\g\t\i\7\l\a\n\p\m\5\k\t\q\h\o\g\f\d\f\m\d\p\f\4\9\0\n\w\h\f\8\t\c\h\4\s\t\k\t\o\3\w\c\m\i\s\f\6\d ]] 00:08:05.785 10:02:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.785 10:02:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:05.785 [2024-11-19 10:02:19.600882] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:05.785 [2024-11-19 10:02:19.601005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60518 ] 00:08:06.044 [2024-11-19 10:02:19.744670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.044 [2024-11-19 10:02:19.784289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.044 [2024-11-19 10:02:19.838187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.044  [2024-11-19T10:02:20.194Z] Copying: 512/512 [B] (average 500 kBps) 00:08:06.305 00:08:06.305 10:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 23mbmfv5vl9m999gyy4cgt5a4mzs3g2rkzxbanf2zoviwcu5ptmr8f3qzyi44dw7wp1i96beiv2m1vhx81z3e6iguf9lanenyeqbog46hwtj1qdcy55rpm8kw53vamrrrtxk9oi89ilw3yef0q0391v8o3k7e2hsuxm1q3hhggditfsutb9c7osgy4okz0xs8agc78dnizt9mehufyln9dzqws72wicv7rhx0ih4ob48d8zxxjlrju3rpq77kxclto0r3c57cf92jy5bhm6dm2hqgt4lyncyyf2v7dshfwcvyfpgf4m9qq9rkmngcq4avnk28tpovtvgkz36xp5idgqdokteoy7aur9mpczhtuel8srp2rbvs54yyb7q2j7w4u02qbfwy6txycenjp0yecvgu778xa6jbv8djf1md21ymh7cgcvnry4hmxu4dxqeg58nk0sc8suxjkvgti7lanpm5ktqhogfdfmdpf490nwhf8tch4stkto3wcmisf6d == \2\3\m\b\m\f\v\5\v\l\9\m\9\9\9\g\y\y\4\c\g\t\5\a\4\m\z\s\3\g\2\r\k\z\x\b\a\n\f\2\z\o\v\i\w\c\u\5\p\t\m\r\8\f\3\q\z\y\i\4\4\d\w\7\w\p\1\i\9\6\b\e\i\v\2\m\1\v\h\x\8\1\z\3\e\6\i\g\u\f\9\l\a\n\e\n\y\e\q\b\o\g\4\6\h\w\t\j\1\q\d\c\y\5\5\r\p\m\8\k\w\5\3\v\a\m\r\r\r\t\x\k\9\o\i\8\9\i\l\w\3\y\e\f\0\q\0\3\9\1\v\8\o\3\k\7\e\2\h\s\u\x\m\1\q\3\h\h\g\g\d\i\t\f\s\u\t\b\9\c\7\o\s\g\y\4\o\k\z\0\x\s\8\a\g\c\7\8\d\n\i\z\t\9\m\e\h\u\f\y\l\n\9\d\z\q\w\s\7\2\w\i\c\v\7\r\h\x\0\i\h\4\o\b\4\8\d\8\z\x\x\j\l\r\j\u\3\r\p\q\7\7\k\x\c\l\t\o\0\r\3\c\5\7\c\f\9\2\j\y\5\b\h\m\6\d\m\2\h\q\g\t\4\l\y\n\c\y\y\f\2\v\7\d\s\h\f\w\c\v\y\f\p\g\f\4\m\9\q\q\9\r\k\m\n\g\c\q\4\a\v\n\k\2\8\t\p\o\v\t\v\g\k\z\3\6\x\p\5\i\d\g\q\d\o\k\t\e\o\y\7\a\u\r\9\m\p\c\z\h\t\u\e\l\8\s\r\p\2\r\b\v\s\5\4\y\y\b\7\q\2\j\7\w\4\u\0\2\q\b\f\w\y\6\t\x\y\c\e\n\j\p\0\y\e\c\v\g\u\7\7\8\x\a\6\j\b\v\8\d\j\f\1\m\d\2\1\y\m\h\7\c\g\c\v\n\r\y\4\h\m\x\u\4\d\x\q\e\g\5\8\n\k\0\s\c\8\s\u\x\j\k\v\g\t\i\7\l\a\n\p\m\5\k\t\q\h\o\g\f\d\f\m\d\p\f\4\9\0\n\w\h\f\8\t\c\h\4\s\t\k\t\o\3\w\c\m\i\s\f\6\d ]] 00:08:06.305 10:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:06.305 10:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:06.305 [2024-11-19 10:02:20.167224] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:06.305 [2024-11-19 10:02:20.167346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60526 ] 00:08:06.564 [2024-11-19 10:02:20.314850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.564 [2024-11-19 10:02:20.375874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.564 [2024-11-19 10:02:20.436884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.868  [2024-11-19T10:02:20.757Z] Copying: 512/512 [B] (average 250 kBps) 00:08:06.868 00:08:06.869 10:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 23mbmfv5vl9m999gyy4cgt5a4mzs3g2rkzxbanf2zoviwcu5ptmr8f3qzyi44dw7wp1i96beiv2m1vhx81z3e6iguf9lanenyeqbog46hwtj1qdcy55rpm8kw53vamrrrtxk9oi89ilw3yef0q0391v8o3k7e2hsuxm1q3hhggditfsutb9c7osgy4okz0xs8agc78dnizt9mehufyln9dzqws72wicv7rhx0ih4ob48d8zxxjlrju3rpq77kxclto0r3c57cf92jy5bhm6dm2hqgt4lyncyyf2v7dshfwcvyfpgf4m9qq9rkmngcq4avnk28tpovtvgkz36xp5idgqdokteoy7aur9mpczhtuel8srp2rbvs54yyb7q2j7w4u02qbfwy6txycenjp0yecvgu778xa6jbv8djf1md21ymh7cgcvnry4hmxu4dxqeg58nk0sc8suxjkvgti7lanpm5ktqhogfdfmdpf490nwhf8tch4stkto3wcmisf6d == \2\3\m\b\m\f\v\5\v\l\9\m\9\9\9\g\y\y\4\c\g\t\5\a\4\m\z\s\3\g\2\r\k\z\x\b\a\n\f\2\z\o\v\i\w\c\u\5\p\t\m\r\8\f\3\q\z\y\i\4\4\d\w\7\w\p\1\i\9\6\b\e\i\v\2\m\1\v\h\x\8\1\z\3\e\6\i\g\u\f\9\l\a\n\e\n\y\e\q\b\o\g\4\6\h\w\t\j\1\q\d\c\y\5\5\r\p\m\8\k\w\5\3\v\a\m\r\r\r\t\x\k\9\o\i\8\9\i\l\w\3\y\e\f\0\q\0\3\9\1\v\8\o\3\k\7\e\2\h\s\u\x\m\1\q\3\h\h\g\g\d\i\t\f\s\u\t\b\9\c\7\o\s\g\y\4\o\k\z\0\x\s\8\a\g\c\7\8\d\n\i\z\t\9\m\e\h\u\f\y\l\n\9\d\z\q\w\s\7\2\w\i\c\v\7\r\h\x\0\i\h\4\o\b\4\8\d\8\z\x\x\j\l\r\j\u\3\r\p\q\7\7\k\x\c\l\t\o\0\r\3\c\5\7\c\f\9\2\j\y\5\b\h\m\6\d\m\2\h\q\g\t\4\l\y\n\c\y\y\f\2\v\7\d\s\h\f\w\c\v\y\f\p\g\f\4\m\9\q\q\9\r\k\m\n\g\c\q\4\a\v\n\k\2\8\t\p\o\v\t\v\g\k\z\3\6\x\p\5\i\d\g\q\d\o\k\t\e\o\y\7\a\u\r\9\m\p\c\z\h\t\u\e\l\8\s\r\p\2\r\b\v\s\5\4\y\y\b\7\q\2\j\7\w\4\u\0\2\q\b\f\w\y\6\t\x\y\c\e\n\j\p\0\y\e\c\v\g\u\7\7\8\x\a\6\j\b\v\8\d\j\f\1\m\d\2\1\y\m\h\7\c\g\c\v\n\r\y\4\h\m\x\u\4\d\x\q\e\g\5\8\n\k\0\s\c\8\s\u\x\j\k\v\g\t\i\7\l\a\n\p\m\5\k\t\q\h\o\g\f\d\f\m\d\p\f\4\9\0\n\w\h\f\8\t\c\h\4\s\t\k\t\o\3\w\c\m\i\s\f\6\d ]] 00:08:06.869 10:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:06.869 10:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:06.869 [2024-11-19 10:02:20.729620] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:06.869 [2024-11-19 10:02:20.729718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60533 ] 00:08:07.129 [2024-11-19 10:02:20.873472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.129 [2024-11-19 10:02:20.913904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.129 [2024-11-19 10:02:20.974402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.129  [2024-11-19T10:02:21.277Z] Copying: 512/512 [B] (average 250 kBps) 00:08:07.388 00:08:07.388 10:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 23mbmfv5vl9m999gyy4cgt5a4mzs3g2rkzxbanf2zoviwcu5ptmr8f3qzyi44dw7wp1i96beiv2m1vhx81z3e6iguf9lanenyeqbog46hwtj1qdcy55rpm8kw53vamrrrtxk9oi89ilw3yef0q0391v8o3k7e2hsuxm1q3hhggditfsutb9c7osgy4okz0xs8agc78dnizt9mehufyln9dzqws72wicv7rhx0ih4ob48d8zxxjlrju3rpq77kxclto0r3c57cf92jy5bhm6dm2hqgt4lyncyyf2v7dshfwcvyfpgf4m9qq9rkmngcq4avnk28tpovtvgkz36xp5idgqdokteoy7aur9mpczhtuel8srp2rbvs54yyb7q2j7w4u02qbfwy6txycenjp0yecvgu778xa6jbv8djf1md21ymh7cgcvnry4hmxu4dxqeg58nk0sc8suxjkvgti7lanpm5ktqhogfdfmdpf490nwhf8tch4stkto3wcmisf6d == \2\3\m\b\m\f\v\5\v\l\9\m\9\9\9\g\y\y\4\c\g\t\5\a\4\m\z\s\3\g\2\r\k\z\x\b\a\n\f\2\z\o\v\i\w\c\u\5\p\t\m\r\8\f\3\q\z\y\i\4\4\d\w\7\w\p\1\i\9\6\b\e\i\v\2\m\1\v\h\x\8\1\z\3\e\6\i\g\u\f\9\l\a\n\e\n\y\e\q\b\o\g\4\6\h\w\t\j\1\q\d\c\y\5\5\r\p\m\8\k\w\5\3\v\a\m\r\r\r\t\x\k\9\o\i\8\9\i\l\w\3\y\e\f\0\q\0\3\9\1\v\8\o\3\k\7\e\2\h\s\u\x\m\1\q\3\h\h\g\g\d\i\t\f\s\u\t\b\9\c\7\o\s\g\y\4\o\k\z\0\x\s\8\a\g\c\7\8\d\n\i\z\t\9\m\e\h\u\f\y\l\n\9\d\z\q\w\s\7\2\w\i\c\v\7\r\h\x\0\i\h\4\o\b\4\8\d\8\z\x\x\j\l\r\j\u\3\r\p\q\7\7\k\x\c\l\t\o\0\r\3\c\5\7\c\f\9\2\j\y\5\b\h\m\6\d\m\2\h\q\g\t\4\l\y\n\c\y\y\f\2\v\7\d\s\h\f\w\c\v\y\f\p\g\f\4\m\9\q\q\9\r\k\m\n\g\c\q\4\a\v\n\k\2\8\t\p\o\v\t\v\g\k\z\3\6\x\p\5\i\d\g\q\d\o\k\t\e\o\y\7\a\u\r\9\m\p\c\z\h\t\u\e\l\8\s\r\p\2\r\b\v\s\5\4\y\y\b\7\q\2\j\7\w\4\u\0\2\q\b\f\w\y\6\t\x\y\c\e\n\j\p\0\y\e\c\v\g\u\7\7\8\x\a\6\j\b\v\8\d\j\f\1\m\d\2\1\y\m\h\7\c\g\c\v\n\r\y\4\h\m\x\u\4\d\x\q\e\g\5\8\n\k\0\s\c\8\s\u\x\j\k\v\g\t\i\7\l\a\n\p\m\5\k\t\q\h\o\g\f\d\f\m\d\p\f\4\9\0\n\w\h\f\8\t\c\h\4\s\t\k\t\o\3\w\c\m\i\s\f\6\d ]] 00:08:07.389 00:08:07.389 real 0m4.565s 00:08:07.389 user 0m2.424s 00:08:07.389 sys 0m1.164s 00:08:07.389 10:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.389 ************************************ 00:08:07.389 END TEST dd_flags_misc_forced_aio 00:08:07.389 10:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:07.389 ************************************ 00:08:07.649 10:02:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:07.649 10:02:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:07.649 10:02:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:07.649 00:08:07.649 real 0m20.883s 00:08:07.649 user 0m10.101s 00:08:07.649 sys 0m6.763s 00:08:07.649 10:02:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.649 ************************************ 00:08:07.649 END TEST spdk_dd_posix 
00:08:07.649 10:02:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:07.649 ************************************ 00:08:07.649 10:02:21 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:07.649 10:02:21 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.649 10:02:21 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.649 10:02:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:07.649 ************************************ 00:08:07.649 START TEST spdk_dd_malloc 00:08:07.649 ************************************ 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:07.649 * Looking for test storage... 00:08:07.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.649 --rc genhtml_branch_coverage=1 00:08:07.649 --rc genhtml_function_coverage=1 00:08:07.649 --rc genhtml_legend=1 00:08:07.649 --rc geninfo_all_blocks=1 00:08:07.649 --rc geninfo_unexecuted_blocks=1 00:08:07.649 00:08:07.649 ' 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.649 --rc genhtml_branch_coverage=1 00:08:07.649 --rc genhtml_function_coverage=1 00:08:07.649 --rc genhtml_legend=1 00:08:07.649 --rc geninfo_all_blocks=1 00:08:07.649 --rc geninfo_unexecuted_blocks=1 00:08:07.649 00:08:07.649 ' 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.649 --rc genhtml_branch_coverage=1 00:08:07.649 --rc genhtml_function_coverage=1 00:08:07.649 --rc genhtml_legend=1 00:08:07.649 --rc geninfo_all_blocks=1 00:08:07.649 --rc geninfo_unexecuted_blocks=1 00:08:07.649 00:08:07.649 ' 00:08:07.649 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.650 --rc genhtml_branch_coverage=1 00:08:07.650 --rc genhtml_function_coverage=1 00:08:07.650 --rc genhtml_legend=1 00:08:07.650 --rc geninfo_all_blocks=1 00:08:07.650 --rc geninfo_unexecuted_blocks=1 00:08:07.650 00:08:07.650 ' 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.650 10:02:21 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.650 10:02:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:07.909 ************************************ 00:08:07.909 START TEST dd_malloc_copy 00:08:07.909 ************************************ 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:07.910 10:02:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:07.910 [2024-11-19 10:02:21.604803] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:07.910 [2024-11-19 10:02:21.604956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60615 ] 00:08:07.910 { 00:08:07.910 "subsystems": [ 00:08:07.910 { 00:08:07.910 "subsystem": "bdev", 00:08:07.910 "config": [ 00:08:07.910 { 00:08:07.910 "params": { 00:08:07.910 "block_size": 512, 00:08:07.910 "num_blocks": 1048576, 00:08:07.910 "name": "malloc0" 00:08:07.910 }, 00:08:07.910 "method": "bdev_malloc_create" 00:08:07.910 }, 00:08:07.910 { 00:08:07.910 "params": { 00:08:07.910 "block_size": 512, 00:08:07.910 "num_blocks": 1048576, 00:08:07.910 "name": "malloc1" 00:08:07.910 }, 00:08:07.910 "method": "bdev_malloc_create" 00:08:07.910 }, 00:08:07.910 { 00:08:07.910 "method": "bdev_wait_for_examine" 00:08:07.910 } 00:08:07.910 ] 00:08:07.910 } 00:08:07.910 ] 00:08:07.910 } 00:08:07.910 [2024-11-19 10:02:21.751215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.169 [2024-11-19 10:02:21.808456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.169 [2024-11-19 10:02:21.866568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.546  [2024-11-19T10:02:24.384Z] Copying: 200/512 [MB] (200 MBps) [2024-11-19T10:02:24.974Z] Copying: 394/512 [MB] (194 MBps) [2024-11-19T10:02:25.542Z] Copying: 512/512 [MB] (average 198 MBps) 00:08:11.653 00:08:11.653 10:02:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:11.653 10:02:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:11.653 10:02:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:11.653 10:02:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.653 [2024-11-19 10:02:25.421492] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:11.653 [2024-11-19 10:02:25.421581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60663 ] 00:08:11.653 { 00:08:11.653 "subsystems": [ 00:08:11.653 { 00:08:11.653 "subsystem": "bdev", 00:08:11.653 "config": [ 00:08:11.653 { 00:08:11.653 "params": { 00:08:11.653 "block_size": 512, 00:08:11.653 "num_blocks": 1048576, 00:08:11.653 "name": "malloc0" 00:08:11.653 }, 00:08:11.653 "method": "bdev_malloc_create" 00:08:11.653 }, 00:08:11.653 { 00:08:11.653 "params": { 00:08:11.653 "block_size": 512, 00:08:11.653 "num_blocks": 1048576, 00:08:11.653 "name": "malloc1" 00:08:11.653 }, 00:08:11.653 "method": "bdev_malloc_create" 00:08:11.653 }, 00:08:11.653 { 00:08:11.653 "method": "bdev_wait_for_examine" 00:08:11.653 } 00:08:11.653 ] 00:08:11.653 } 00:08:11.653 ] 00:08:11.653 } 00:08:11.912 [2024-11-19 10:02:25.566557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.912 [2024-11-19 10:02:25.612886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.912 [2024-11-19 10:02:25.664714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.288  [2024-11-19T10:02:28.111Z] Copying: 225/512 [MB] (225 MBps) [2024-11-19T10:02:28.678Z] Copying: 430/512 [MB] (205 MBps) [2024-11-19T10:02:29.245Z] Copying: 512/512 [MB] (average 212 MBps) 00:08:15.356 00:08:15.356 00:08:15.356 real 0m7.463s 00:08:15.356 user 0m6.450s 00:08:15.356 sys 0m0.859s 00:08:15.356 10:02:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.356 ************************************ 00:08:15.356 END TEST dd_malloc_copy 00:08:15.356 10:02:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:15.356 ************************************ 00:08:15.356 00:08:15.356 real 0m7.715s 00:08:15.356 user 0m6.590s 00:08:15.356 sys 0m0.977s 00:08:15.356 10:02:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.356 10:02:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:15.356 ************************************ 00:08:15.356 END TEST spdk_dd_malloc 00:08:15.356 ************************************ 00:08:15.356 10:02:29 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:15.356 10:02:29 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:15.356 10:02:29 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.356 10:02:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:15.356 ************************************ 00:08:15.356 START TEST spdk_dd_bdev_to_bdev 00:08:15.356 ************************************ 00:08:15.356 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:15.356 * Looking for test storage... 
00:08:15.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:15.356 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.356 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.356 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:15.616 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.617 --rc genhtml_branch_coverage=1 00:08:15.617 --rc genhtml_function_coverage=1 00:08:15.617 --rc genhtml_legend=1 00:08:15.617 --rc geninfo_all_blocks=1 00:08:15.617 --rc geninfo_unexecuted_blocks=1 00:08:15.617 00:08:15.617 ' 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.617 --rc genhtml_branch_coverage=1 00:08:15.617 --rc genhtml_function_coverage=1 00:08:15.617 --rc genhtml_legend=1 00:08:15.617 --rc geninfo_all_blocks=1 00:08:15.617 --rc geninfo_unexecuted_blocks=1 00:08:15.617 00:08:15.617 ' 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.617 --rc genhtml_branch_coverage=1 00:08:15.617 --rc genhtml_function_coverage=1 00:08:15.617 --rc genhtml_legend=1 00:08:15.617 --rc geninfo_all_blocks=1 00:08:15.617 --rc geninfo_unexecuted_blocks=1 00:08:15.617 00:08:15.617 ' 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.617 --rc genhtml_branch_coverage=1 00:08:15.617 --rc genhtml_function_coverage=1 00:08:15.617 --rc genhtml_legend=1 00:08:15.617 --rc geninfo_all_blocks=1 00:08:15.617 --rc geninfo_unexecuted_blocks=1 00:08:15.617 00:08:15.617 ' 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.617 10:02:29 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:15.617 ************************************ 00:08:15.617 START TEST dd_inflate_file 00:08:15.617 ************************************ 00:08:15.617 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:15.617 [2024-11-19 10:02:29.324467] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:15.617 [2024-11-19 10:02:29.324549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60781 ] 00:08:15.617 [2024-11-19 10:02:29.468048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.876 [2024-11-19 10:02:29.520954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.876 [2024-11-19 10:02:29.576466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.876  [2024-11-19T10:02:30.024Z] Copying: 64/64 [MB] (average 1422 MBps) 00:08:16.135 00:08:16.135 00:08:16.135 real 0m0.568s 00:08:16.135 user 0m0.322s 00:08:16.135 sys 0m0.309s 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.135 ************************************ 00:08:16.135 END TEST dd_inflate_file 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:16.135 ************************************ 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:16.135 ************************************ 00:08:16.135 START TEST dd_copy_to_out_bdev 00:08:16.135 ************************************ 00:08:16.135 10:02:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:16.135 { 00:08:16.135 "subsystems": [ 00:08:16.135 { 00:08:16.135 "subsystem": "bdev", 00:08:16.135 "config": [ 00:08:16.135 { 00:08:16.135 "params": { 00:08:16.135 "trtype": "pcie", 00:08:16.135 "traddr": "0000:00:10.0", 00:08:16.135 "name": "Nvme0" 00:08:16.135 }, 00:08:16.135 "method": "bdev_nvme_attach_controller" 00:08:16.135 }, 00:08:16.135 { 00:08:16.135 "params": { 00:08:16.135 "trtype": "pcie", 00:08:16.135 "traddr": "0000:00:11.0", 00:08:16.135 "name": "Nvme1" 00:08:16.135 }, 00:08:16.135 "method": "bdev_nvme_attach_controller" 00:08:16.135 }, 00:08:16.135 { 00:08:16.135 "method": "bdev_wait_for_examine" 00:08:16.135 } 00:08:16.135 ] 00:08:16.135 } 00:08:16.135 ] 00:08:16.135 } 00:08:16.135 [2024-11-19 10:02:29.957062] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:16.135 [2024-11-19 10:02:29.957156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60814 ] 00:08:16.395 [2024-11-19 10:02:30.104296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.395 [2024-11-19 10:02:30.163971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.395 [2024-11-19 10:02:30.221642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.777  [2024-11-19T10:02:31.666Z] Copying: 60/64 [MB] (60 MBps) [2024-11-19T10:02:31.926Z] Copying: 64/64 [MB] (average 60 MBps) 00:08:18.037 00:08:18.037 00:08:18.037 real 0m1.795s 00:08:18.037 user 0m1.568s 00:08:18.037 sys 0m1.410s 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:18.037 ************************************ 00:08:18.037 END TEST dd_copy_to_out_bdev 00:08:18.037 ************************************ 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:18.037 ************************************ 00:08:18.037 START TEST dd_offset_magic 00:08:18.037 ************************************ 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:18.037 10:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:18.037 [2024-11-19 10:02:31.806582] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:18.037 [2024-11-19 10:02:31.806658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60854 ] 00:08:18.037 { 00:08:18.037 "subsystems": [ 00:08:18.037 { 00:08:18.037 "subsystem": "bdev", 00:08:18.037 "config": [ 00:08:18.037 { 00:08:18.037 "params": { 00:08:18.037 "trtype": "pcie", 00:08:18.037 "traddr": "0000:00:10.0", 00:08:18.037 "name": "Nvme0" 00:08:18.037 }, 00:08:18.037 "method": "bdev_nvme_attach_controller" 00:08:18.037 }, 00:08:18.037 { 00:08:18.037 "params": { 00:08:18.037 "trtype": "pcie", 00:08:18.037 "traddr": "0000:00:11.0", 00:08:18.037 "name": "Nvme1" 00:08:18.037 }, 00:08:18.037 "method": "bdev_nvme_attach_controller" 00:08:18.037 }, 00:08:18.037 { 00:08:18.037 "method": "bdev_wait_for_examine" 00:08:18.037 } 00:08:18.037 ] 00:08:18.037 } 00:08:18.037 ] 00:08:18.037 } 00:08:18.296 [2024-11-19 10:02:31.949557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.296 [2024-11-19 10:02:32.007526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.296 [2024-11-19 10:02:32.060108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.554  [2024-11-19T10:02:32.703Z] Copying: 65/65 [MB] (average 984 MBps) 00:08:18.814 00:08:18.814 10:02:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:18.814 10:02:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:18.814 10:02:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:18.814 10:02:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 [2024-11-19 10:02:32.618497] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:18.814 [2024-11-19 10:02:32.618654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60874 ] 00:08:18.814 { 00:08:18.814 "subsystems": [ 00:08:18.814 { 00:08:18.814 "subsystem": "bdev", 00:08:18.814 "config": [ 00:08:18.814 { 00:08:18.814 "params": { 00:08:18.814 "trtype": "pcie", 00:08:18.814 "traddr": "0000:00:10.0", 00:08:18.814 "name": "Nvme0" 00:08:18.814 }, 00:08:18.814 "method": "bdev_nvme_attach_controller" 00:08:18.814 }, 00:08:18.814 { 00:08:18.814 "params": { 00:08:18.814 "trtype": "pcie", 00:08:18.814 "traddr": "0000:00:11.0", 00:08:18.814 "name": "Nvme1" 00:08:18.814 }, 00:08:18.814 "method": "bdev_nvme_attach_controller" 00:08:18.814 }, 00:08:18.814 { 00:08:18.814 "method": "bdev_wait_for_examine" 00:08:18.814 } 00:08:18.814 ] 00:08:18.814 } 00:08:18.814 ] 00:08:18.814 } 00:08:19.073 [2024-11-19 10:02:32.768525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.073 [2024-11-19 10:02:32.821981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.073 [2024-11-19 10:02:32.875362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.331  [2024-11-19T10:02:33.480Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:19.591 00:08:19.591 10:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:19.591 10:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:19.591 10:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:19.591 10:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:19.591 10:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:19.591 10:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:19.591 10:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:19.591 [2024-11-19 10:02:33.327527] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:19.591 [2024-11-19 10:02:33.327624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60896 ] 00:08:19.591 { 00:08:19.591 "subsystems": [ 00:08:19.591 { 00:08:19.591 "subsystem": "bdev", 00:08:19.591 "config": [ 00:08:19.591 { 00:08:19.591 "params": { 00:08:19.591 "trtype": "pcie", 00:08:19.591 "traddr": "0000:00:10.0", 00:08:19.591 "name": "Nvme0" 00:08:19.591 }, 00:08:19.591 "method": "bdev_nvme_attach_controller" 00:08:19.591 }, 00:08:19.591 { 00:08:19.591 "params": { 00:08:19.591 "trtype": "pcie", 00:08:19.591 "traddr": "0000:00:11.0", 00:08:19.591 "name": "Nvme1" 00:08:19.591 }, 00:08:19.591 "method": "bdev_nvme_attach_controller" 00:08:19.591 }, 00:08:19.591 { 00:08:19.591 "method": "bdev_wait_for_examine" 00:08:19.591 } 00:08:19.591 ] 00:08:19.591 } 00:08:19.591 ] 00:08:19.591 } 00:08:19.591 [2024-11-19 10:02:33.476347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.850 [2024-11-19 10:02:33.530682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.850 [2024-11-19 10:02:33.584542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.189  [2024-11-19T10:02:34.078Z] Copying: 65/65 [MB] (average 1048 MBps) 00:08:20.189 00:08:20.189 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:20.189 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:20.189 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:20.189 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:20.454 [2024-11-19 10:02:34.108469] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:20.454 [2024-11-19 10:02:34.108614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60910 ] 00:08:20.454 { 00:08:20.454 "subsystems": [ 00:08:20.454 { 00:08:20.454 "subsystem": "bdev", 00:08:20.454 "config": [ 00:08:20.454 { 00:08:20.454 "params": { 00:08:20.454 "trtype": "pcie", 00:08:20.454 "traddr": "0000:00:10.0", 00:08:20.454 "name": "Nvme0" 00:08:20.454 }, 00:08:20.454 "method": "bdev_nvme_attach_controller" 00:08:20.454 }, 00:08:20.454 { 00:08:20.454 "params": { 00:08:20.454 "trtype": "pcie", 00:08:20.454 "traddr": "0000:00:11.0", 00:08:20.454 "name": "Nvme1" 00:08:20.454 }, 00:08:20.454 "method": "bdev_nvme_attach_controller" 00:08:20.454 }, 00:08:20.454 { 00:08:20.454 "method": "bdev_wait_for_examine" 00:08:20.454 } 00:08:20.454 ] 00:08:20.454 } 00:08:20.454 ] 00:08:20.454 } 00:08:20.454 [2024-11-19 10:02:34.254482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.454 [2024-11-19 10:02:34.332740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.712 [2024-11-19 10:02:34.389811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.712  [2024-11-19T10:02:34.859Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:20.970 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:20.970 00:08:20.970 real 0m3.006s 00:08:20.970 user 0m2.231s 00:08:20.970 sys 0m0.896s 00:08:20.970 ************************************ 00:08:20.970 END TEST dd_offset_magic 00:08:20.970 ************************************ 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:20.970 10:02:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:21.229 [2024-11-19 10:02:34.861236] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:21.229 [2024-11-19 10:02:34.861544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60942 ] 00:08:21.229 { 00:08:21.229 "subsystems": [ 00:08:21.229 { 00:08:21.229 "subsystem": "bdev", 00:08:21.229 "config": [ 00:08:21.229 { 00:08:21.229 "params": { 00:08:21.229 "trtype": "pcie", 00:08:21.229 "traddr": "0000:00:10.0", 00:08:21.229 "name": "Nvme0" 00:08:21.229 }, 00:08:21.229 "method": "bdev_nvme_attach_controller" 00:08:21.229 }, 00:08:21.229 { 00:08:21.229 "params": { 00:08:21.229 "trtype": "pcie", 00:08:21.229 "traddr": "0000:00:11.0", 00:08:21.229 "name": "Nvme1" 00:08:21.229 }, 00:08:21.229 "method": "bdev_nvme_attach_controller" 00:08:21.229 }, 00:08:21.229 { 00:08:21.229 "method": "bdev_wait_for_examine" 00:08:21.229 } 00:08:21.229 ] 00:08:21.229 } 00:08:21.229 ] 00:08:21.229 } 00:08:21.229 [2024-11-19 10:02:35.009681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.229 [2024-11-19 10:02:35.093591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.488 [2024-11-19 10:02:35.155298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.488  [2024-11-19T10:02:35.635Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:21.746 00:08:21.746 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:21.746 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:21.746 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:21.746 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:21.746 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:21.746 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:21.746 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:21.747 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:21.747 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:21.747 10:02:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:21.747 [2024-11-19 10:02:35.594032] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:21.747 [2024-11-19 10:02:35.594316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60963 ] 00:08:21.747 { 00:08:21.747 "subsystems": [ 00:08:21.747 { 00:08:21.747 "subsystem": "bdev", 00:08:21.747 "config": [ 00:08:21.747 { 00:08:21.747 "params": { 00:08:21.747 "trtype": "pcie", 00:08:21.747 "traddr": "0000:00:10.0", 00:08:21.747 "name": "Nvme0" 00:08:21.747 }, 00:08:21.747 "method": "bdev_nvme_attach_controller" 00:08:21.747 }, 00:08:21.747 { 00:08:21.747 "params": { 00:08:21.747 "trtype": "pcie", 00:08:21.747 "traddr": "0000:00:11.0", 00:08:21.747 "name": "Nvme1" 00:08:21.747 }, 00:08:21.747 "method": "bdev_nvme_attach_controller" 00:08:21.747 }, 00:08:21.747 { 00:08:21.747 "method": "bdev_wait_for_examine" 00:08:21.747 } 00:08:21.747 ] 00:08:21.747 } 00:08:21.747 ] 00:08:21.747 } 00:08:22.005 [2024-11-19 10:02:35.740686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.005 [2024-11-19 10:02:35.794559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.005 [2024-11-19 10:02:35.851421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.263  [2024-11-19T10:02:36.411Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:22.522 00:08:22.522 10:02:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:22.522 ************************************ 00:08:22.522 END TEST spdk_dd_bdev_to_bdev 00:08:22.522 ************************************ 00:08:22.522 00:08:22.522 real 0m7.169s 00:08:22.522 user 0m5.273s 00:08:22.522 sys 0m3.372s 00:08:22.522 10:02:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.522 10:02:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:22.522 10:02:36 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:22.522 10:02:36 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:22.522 10:02:36 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.522 10:02:36 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.522 10:02:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:22.522 ************************************ 00:08:22.522 START TEST spdk_dd_uring 00:08:22.522 ************************************ 00:08:22.522 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:22.522 * Looking for test storage... 
00:08:22.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:22.522 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.522 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.522 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.782 --rc genhtml_branch_coverage=1 00:08:22.782 --rc genhtml_function_coverage=1 00:08:22.782 --rc genhtml_legend=1 00:08:22.782 --rc geninfo_all_blocks=1 00:08:22.782 --rc geninfo_unexecuted_blocks=1 00:08:22.782 00:08:22.782 ' 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.782 --rc genhtml_branch_coverage=1 00:08:22.782 --rc genhtml_function_coverage=1 00:08:22.782 --rc genhtml_legend=1 00:08:22.782 --rc geninfo_all_blocks=1 00:08:22.782 --rc geninfo_unexecuted_blocks=1 00:08:22.782 00:08:22.782 ' 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.782 --rc genhtml_branch_coverage=1 00:08:22.782 --rc genhtml_function_coverage=1 00:08:22.782 --rc genhtml_legend=1 00:08:22.782 --rc geninfo_all_blocks=1 00:08:22.782 --rc geninfo_unexecuted_blocks=1 00:08:22.782 00:08:22.782 ' 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.782 --rc genhtml_branch_coverage=1 00:08:22.782 --rc genhtml_function_coverage=1 00:08:22.782 --rc genhtml_legend=1 00:08:22.782 --rc geninfo_all_blocks=1 00:08:22.782 --rc geninfo_unexecuted_blocks=1 00:08:22.782 00:08:22.782 ' 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:22.782 ************************************ 00:08:22.782 START TEST dd_uring_copy 00:08:22.782 ************************************ 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:22.782 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:22.783 
10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=l2kqsbnksje42o0gy6l7hxsun7y4glzcgeb7joxv4z977e4r6pffrxw2nffiz8f3rpkzrt2831fy2450btaobvkyjsj3vg7p11bpgca499ksdyay9wip3p5mc7qoai2zaskrnnc40si23sgvkmctb07qn16os9orbdbetmwcfj6znamilfohusf3zokv4uk0o3nlbq44dqp6mhlsl2t2ivuypib0zeq9prcjrqpxfiij8pr0ne0i8yl6iv6lsf2mkx052t78niiwo4hmtkaw63yr093tghl1msmbtfaq43y5emux8h13cnjd2413j7wmynueejdx2m5bga63hkqos802s8thrxnn8o3x7alrrzpu3kop84h9v8oz1yx5own6ergtcx1bm7xip1ny1q4noohnhb6sd3ezatjyyc5lwh99tholpk8o4p3xd5419pj9pxkjrmt5i7hk0hr6m5voz9rugm3hefyk77jrfxg0x3r1ps544dy3qxwx56dd84iunq8udqi3ftfcac3hfyppyl4m6u6j4ftmmxmkkg4yt1v4zzce4gnbrdmbomoi2gbfwqjrlag2b8h9r5uvpvshlo516xgl0sy8l5l04pg4li3iy7u1nb2q5bi0wrkgrwh0p396q47uzssjwlkqpojgg1i2o0u8feyve9xib8nx6jbu2n7la9yulrg5orw5httqtiq0xkia38o1lv8xxntp1hprjydlcxlyoxr53saod9l5olosl16b310ua131agtfhgg7n397venmwg074h90teroc4azxi7g75k37ruw8688w77zailhvcfx34bh9tkmhnggtmtmydkr0oz5k8kr8df6hhd5dyouoot3x61ei6glrots79yzbpwgfz6n41y553ui258mcjf9fn5wxta9jtp3ayqot2kr07v4w13wz2ae1nq67skbf4s3mi4mxpk9d4dj5xs7wvvgmv1gv6n8hozr1e2lg3lliak7w6k61yuben7ubzd3sdu9brwh7qlr 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
l2kqsbnksje42o0gy6l7hxsun7y4glzcgeb7joxv4z977e4r6pffrxw2nffiz8f3rpkzrt2831fy2450btaobvkyjsj3vg7p11bpgca499ksdyay9wip3p5mc7qoai2zaskrnnc40si23sgvkmctb07qn16os9orbdbetmwcfj6znamilfohusf3zokv4uk0o3nlbq44dqp6mhlsl2t2ivuypib0zeq9prcjrqpxfiij8pr0ne0i8yl6iv6lsf2mkx052t78niiwo4hmtkaw63yr093tghl1msmbtfaq43y5emux8h13cnjd2413j7wmynueejdx2m5bga63hkqos802s8thrxnn8o3x7alrrzpu3kop84h9v8oz1yx5own6ergtcx1bm7xip1ny1q4noohnhb6sd3ezatjyyc5lwh99tholpk8o4p3xd5419pj9pxkjrmt5i7hk0hr6m5voz9rugm3hefyk77jrfxg0x3r1ps544dy3qxwx56dd84iunq8udqi3ftfcac3hfyppyl4m6u6j4ftmmxmkkg4yt1v4zzce4gnbrdmbomoi2gbfwqjrlag2b8h9r5uvpvshlo516xgl0sy8l5l04pg4li3iy7u1nb2q5bi0wrkgrwh0p396q47uzssjwlkqpojgg1i2o0u8feyve9xib8nx6jbu2n7la9yulrg5orw5httqtiq0xkia38o1lv8xxntp1hprjydlcxlyoxr53saod9l5olosl16b310ua131agtfhgg7n397venmwg074h90teroc4azxi7g75k37ruw8688w77zailhvcfx34bh9tkmhnggtmtmydkr0oz5k8kr8df6hhd5dyouoot3x61ei6glrots79yzbpwgfz6n41y553ui258mcjf9fn5wxta9jtp3ayqot2kr07v4w13wz2ae1nq67skbf4s3mi4mxpk9d4dj5xs7wvvgmv1gv6n8hozr1e2lg3lliak7w6k61yuben7ubzd3sdu9brwh7qlr 00:08:22.783 10:02:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:22.783 [2024-11-19 10:02:36.601707] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:22.783 [2024-11-19 10:02:36.601953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61043 ] 00:08:23.041 [2024-11-19 10:02:36.742497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.041 [2024-11-19 10:02:36.786392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.041 [2024-11-19 10:02:36.839031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.608  [2024-11-19T10:02:38.066Z] Copying: 511/511 [MB] (average 1340 MBps) 00:08:24.177 00:08:24.177 10:02:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:24.177 10:02:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:24.177 10:02:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:24.177 10:02:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:24.177 [2024-11-19 10:02:37.854842] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:24.177 [2024-11-19 10:02:37.854985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61060 ] 00:08:24.177 { 00:08:24.177 "subsystems": [ 00:08:24.177 { 00:08:24.177 "subsystem": "bdev", 00:08:24.177 "config": [ 00:08:24.177 { 00:08:24.177 "params": { 00:08:24.177 "block_size": 512, 00:08:24.177 "num_blocks": 1048576, 00:08:24.177 "name": "malloc0" 00:08:24.177 }, 00:08:24.177 "method": "bdev_malloc_create" 00:08:24.177 }, 00:08:24.177 { 00:08:24.177 "params": { 00:08:24.177 "filename": "/dev/zram1", 00:08:24.177 "name": "uring0" 00:08:24.177 }, 00:08:24.177 "method": "bdev_uring_create" 00:08:24.177 }, 00:08:24.177 { 00:08:24.177 "method": "bdev_wait_for_examine" 00:08:24.177 } 00:08:24.177 ] 00:08:24.177 } 00:08:24.177 ] 00:08:24.177 } 00:08:24.177 [2024-11-19 10:02:38.001792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.177 [2024-11-19 10:02:38.063485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.435 [2024-11-19 10:02:38.116745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.812  [2024-11-19T10:02:40.639Z] Copying: 244/512 [MB] (244 MBps) [2024-11-19T10:02:40.639Z] Copying: 478/512 [MB] (234 MBps) [2024-11-19T10:02:40.897Z] Copying: 512/512 [MB] (average 239 MBps) 00:08:27.008 00:08:27.008 10:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:27.008 10:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:27.008 10:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:27.008 10:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:27.008 [2024-11-19 10:02:40.897275] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:27.008 [2024-11-19 10:02:40.897624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61104 ] 00:08:27.266 { 00:08:27.266 "subsystems": [ 00:08:27.266 { 00:08:27.266 "subsystem": "bdev", 00:08:27.266 "config": [ 00:08:27.266 { 00:08:27.266 "params": { 00:08:27.266 "block_size": 512, 00:08:27.266 "num_blocks": 1048576, 00:08:27.266 "name": "malloc0" 00:08:27.266 }, 00:08:27.266 "method": "bdev_malloc_create" 00:08:27.266 }, 00:08:27.266 { 00:08:27.266 "params": { 00:08:27.266 "filename": "/dev/zram1", 00:08:27.266 "name": "uring0" 00:08:27.266 }, 00:08:27.266 "method": "bdev_uring_create" 00:08:27.266 }, 00:08:27.266 { 00:08:27.266 "method": "bdev_wait_for_examine" 00:08:27.266 } 00:08:27.266 ] 00:08:27.266 } 00:08:27.266 ] 00:08:27.266 } 00:08:27.266 [2024-11-19 10:02:41.047071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.266 [2024-11-19 10:02:41.102124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.523 [2024-11-19 10:02:41.159730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.900  [2024-11-19T10:02:43.725Z] Copying: 186/512 [MB] (186 MBps) [2024-11-19T10:02:44.294Z] Copying: 360/512 [MB] (173 MBps) [2024-11-19T10:02:44.862Z] Copying: 512/512 [MB] (average 177 MBps) 00:08:30.973 00:08:30.973 10:02:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:30.973 10:02:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ l2kqsbnksje42o0gy6l7hxsun7y4glzcgeb7joxv4z977e4r6pffrxw2nffiz8f3rpkzrt2831fy2450btaobvkyjsj3vg7p11bpgca499ksdyay9wip3p5mc7qoai2zaskrnnc40si23sgvkmctb07qn16os9orbdbetmwcfj6znamilfohusf3zokv4uk0o3nlbq44dqp6mhlsl2t2ivuypib0zeq9prcjrqpxfiij8pr0ne0i8yl6iv6lsf2mkx052t78niiwo4hmtkaw63yr093tghl1msmbtfaq43y5emux8h13cnjd2413j7wmynueejdx2m5bga63hkqos802s8thrxnn8o3x7alrrzpu3kop84h9v8oz1yx5own6ergtcx1bm7xip1ny1q4noohnhb6sd3ezatjyyc5lwh99tholpk8o4p3xd5419pj9pxkjrmt5i7hk0hr6m5voz9rugm3hefyk77jrfxg0x3r1ps544dy3qxwx56dd84iunq8udqi3ftfcac3hfyppyl4m6u6j4ftmmxmkkg4yt1v4zzce4gnbrdmbomoi2gbfwqjrlag2b8h9r5uvpvshlo516xgl0sy8l5l04pg4li3iy7u1nb2q5bi0wrkgrwh0p396q47uzssjwlkqpojgg1i2o0u8feyve9xib8nx6jbu2n7la9yulrg5orw5httqtiq0xkia38o1lv8xxntp1hprjydlcxlyoxr53saod9l5olosl16b310ua131agtfhgg7n397venmwg074h90teroc4azxi7g75k37ruw8688w77zailhvcfx34bh9tkmhnggtmtmydkr0oz5k8kr8df6hhd5dyouoot3x61ei6glrots79yzbpwgfz6n41y553ui258mcjf9fn5wxta9jtp3ayqot2kr07v4w13wz2ae1nq67skbf4s3mi4mxpk9d4dj5xs7wvvgmv1gv6n8hozr1e2lg3lliak7w6k61yuben7ubzd3sdu9brwh7qlr == 
\l\2\k\q\s\b\n\k\s\j\e\4\2\o\0\g\y\6\l\7\h\x\s\u\n\7\y\4\g\l\z\c\g\e\b\7\j\o\x\v\4\z\9\7\7\e\4\r\6\p\f\f\r\x\w\2\n\f\f\i\z\8\f\3\r\p\k\z\r\t\2\8\3\1\f\y\2\4\5\0\b\t\a\o\b\v\k\y\j\s\j\3\v\g\7\p\1\1\b\p\g\c\a\4\9\9\k\s\d\y\a\y\9\w\i\p\3\p\5\m\c\7\q\o\a\i\2\z\a\s\k\r\n\n\c\4\0\s\i\2\3\s\g\v\k\m\c\t\b\0\7\q\n\1\6\o\s\9\o\r\b\d\b\e\t\m\w\c\f\j\6\z\n\a\m\i\l\f\o\h\u\s\f\3\z\o\k\v\4\u\k\0\o\3\n\l\b\q\4\4\d\q\p\6\m\h\l\s\l\2\t\2\i\v\u\y\p\i\b\0\z\e\q\9\p\r\c\j\r\q\p\x\f\i\i\j\8\p\r\0\n\e\0\i\8\y\l\6\i\v\6\l\s\f\2\m\k\x\0\5\2\t\7\8\n\i\i\w\o\4\h\m\t\k\a\w\6\3\y\r\0\9\3\t\g\h\l\1\m\s\m\b\t\f\a\q\4\3\y\5\e\m\u\x\8\h\1\3\c\n\j\d\2\4\1\3\j\7\w\m\y\n\u\e\e\j\d\x\2\m\5\b\g\a\6\3\h\k\q\o\s\8\0\2\s\8\t\h\r\x\n\n\8\o\3\x\7\a\l\r\r\z\p\u\3\k\o\p\8\4\h\9\v\8\o\z\1\y\x\5\o\w\n\6\e\r\g\t\c\x\1\b\m\7\x\i\p\1\n\y\1\q\4\n\o\o\h\n\h\b\6\s\d\3\e\z\a\t\j\y\y\c\5\l\w\h\9\9\t\h\o\l\p\k\8\o\4\p\3\x\d\5\4\1\9\p\j\9\p\x\k\j\r\m\t\5\i\7\h\k\0\h\r\6\m\5\v\o\z\9\r\u\g\m\3\h\e\f\y\k\7\7\j\r\f\x\g\0\x\3\r\1\p\s\5\4\4\d\y\3\q\x\w\x\5\6\d\d\8\4\i\u\n\q\8\u\d\q\i\3\f\t\f\c\a\c\3\h\f\y\p\p\y\l\4\m\6\u\6\j\4\f\t\m\m\x\m\k\k\g\4\y\t\1\v\4\z\z\c\e\4\g\n\b\r\d\m\b\o\m\o\i\2\g\b\f\w\q\j\r\l\a\g\2\b\8\h\9\r\5\u\v\p\v\s\h\l\o\5\1\6\x\g\l\0\s\y\8\l\5\l\0\4\p\g\4\l\i\3\i\y\7\u\1\n\b\2\q\5\b\i\0\w\r\k\g\r\w\h\0\p\3\9\6\q\4\7\u\z\s\s\j\w\l\k\q\p\o\j\g\g\1\i\2\o\0\u\8\f\e\y\v\e\9\x\i\b\8\n\x\6\j\b\u\2\n\7\l\a\9\y\u\l\r\g\5\o\r\w\5\h\t\t\q\t\i\q\0\x\k\i\a\3\8\o\1\l\v\8\x\x\n\t\p\1\h\p\r\j\y\d\l\c\x\l\y\o\x\r\5\3\s\a\o\d\9\l\5\o\l\o\s\l\1\6\b\3\1\0\u\a\1\3\1\a\g\t\f\h\g\g\7\n\3\9\7\v\e\n\m\w\g\0\7\4\h\9\0\t\e\r\o\c\4\a\z\x\i\7\g\7\5\k\3\7\r\u\w\8\6\8\8\w\7\7\z\a\i\l\h\v\c\f\x\3\4\b\h\9\t\k\m\h\n\g\g\t\m\t\m\y\d\k\r\0\o\z\5\k\8\k\r\8\d\f\6\h\h\d\5\d\y\o\u\o\o\t\3\x\6\1\e\i\6\g\l\r\o\t\s\7\9\y\z\b\p\w\g\f\z\6\n\4\1\y\5\5\3\u\i\2\5\8\m\c\j\f\9\f\n\5\w\x\t\a\9\j\t\p\3\a\y\q\o\t\2\k\r\0\7\v\4\w\1\3\w\z\2\a\e\1\n\q\6\7\s\k\b\f\4\s\3\m\i\4\m\x\p\k\9\d\4\d\j\5\x\s\7\w\v\v\g\m\v\1\g\v\6\n\8\h\o\z\r\1\e\2\l\g\3\l\l\i\a\k\7\w\6\k\6\1\y\u\b\e\n\7\u\b\z\d\3\s\d\u\9\b\r\w\h\7\q\l\r ]] 00:08:30.973 10:02:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:30.973 10:02:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ l2kqsbnksje42o0gy6l7hxsun7y4glzcgeb7joxv4z977e4r6pffrxw2nffiz8f3rpkzrt2831fy2450btaobvkyjsj3vg7p11bpgca499ksdyay9wip3p5mc7qoai2zaskrnnc40si23sgvkmctb07qn16os9orbdbetmwcfj6znamilfohusf3zokv4uk0o3nlbq44dqp6mhlsl2t2ivuypib0zeq9prcjrqpxfiij8pr0ne0i8yl6iv6lsf2mkx052t78niiwo4hmtkaw63yr093tghl1msmbtfaq43y5emux8h13cnjd2413j7wmynueejdx2m5bga63hkqos802s8thrxnn8o3x7alrrzpu3kop84h9v8oz1yx5own6ergtcx1bm7xip1ny1q4noohnhb6sd3ezatjyyc5lwh99tholpk8o4p3xd5419pj9pxkjrmt5i7hk0hr6m5voz9rugm3hefyk77jrfxg0x3r1ps544dy3qxwx56dd84iunq8udqi3ftfcac3hfyppyl4m6u6j4ftmmxmkkg4yt1v4zzce4gnbrdmbomoi2gbfwqjrlag2b8h9r5uvpvshlo516xgl0sy8l5l04pg4li3iy7u1nb2q5bi0wrkgrwh0p396q47uzssjwlkqpojgg1i2o0u8feyve9xib8nx6jbu2n7la9yulrg5orw5httqtiq0xkia38o1lv8xxntp1hprjydlcxlyoxr53saod9l5olosl16b310ua131agtfhgg7n397venmwg074h90teroc4azxi7g75k37ruw8688w77zailhvcfx34bh9tkmhnggtmtmydkr0oz5k8kr8df6hhd5dyouoot3x61ei6glrots79yzbpwgfz6n41y553ui258mcjf9fn5wxta9jtp3ayqot2kr07v4w13wz2ae1nq67skbf4s3mi4mxpk9d4dj5xs7wvvgmv1gv6n8hozr1e2lg3lliak7w6k61yuben7ubzd3sdu9brwh7qlr == 
\l\2\k\q\s\b\n\k\s\j\e\4\2\o\0\g\y\6\l\7\h\x\s\u\n\7\y\4\g\l\z\c\g\e\b\7\j\o\x\v\4\z\9\7\7\e\4\r\6\p\f\f\r\x\w\2\n\f\f\i\z\8\f\3\r\p\k\z\r\t\2\8\3\1\f\y\2\4\5\0\b\t\a\o\b\v\k\y\j\s\j\3\v\g\7\p\1\1\b\p\g\c\a\4\9\9\k\s\d\y\a\y\9\w\i\p\3\p\5\m\c\7\q\o\a\i\2\z\a\s\k\r\n\n\c\4\0\s\i\2\3\s\g\v\k\m\c\t\b\0\7\q\n\1\6\o\s\9\o\r\b\d\b\e\t\m\w\c\f\j\6\z\n\a\m\i\l\f\o\h\u\s\f\3\z\o\k\v\4\u\k\0\o\3\n\l\b\q\4\4\d\q\p\6\m\h\l\s\l\2\t\2\i\v\u\y\p\i\b\0\z\e\q\9\p\r\c\j\r\q\p\x\f\i\i\j\8\p\r\0\n\e\0\i\8\y\l\6\i\v\6\l\s\f\2\m\k\x\0\5\2\t\7\8\n\i\i\w\o\4\h\m\t\k\a\w\6\3\y\r\0\9\3\t\g\h\l\1\m\s\m\b\t\f\a\q\4\3\y\5\e\m\u\x\8\h\1\3\c\n\j\d\2\4\1\3\j\7\w\m\y\n\u\e\e\j\d\x\2\m\5\b\g\a\6\3\h\k\q\o\s\8\0\2\s\8\t\h\r\x\n\n\8\o\3\x\7\a\l\r\r\z\p\u\3\k\o\p\8\4\h\9\v\8\o\z\1\y\x\5\o\w\n\6\e\r\g\t\c\x\1\b\m\7\x\i\p\1\n\y\1\q\4\n\o\o\h\n\h\b\6\s\d\3\e\z\a\t\j\y\y\c\5\l\w\h\9\9\t\h\o\l\p\k\8\o\4\p\3\x\d\5\4\1\9\p\j\9\p\x\k\j\r\m\t\5\i\7\h\k\0\h\r\6\m\5\v\o\z\9\r\u\g\m\3\h\e\f\y\k\7\7\j\r\f\x\g\0\x\3\r\1\p\s\5\4\4\d\y\3\q\x\w\x\5\6\d\d\8\4\i\u\n\q\8\u\d\q\i\3\f\t\f\c\a\c\3\h\f\y\p\p\y\l\4\m\6\u\6\j\4\f\t\m\m\x\m\k\k\g\4\y\t\1\v\4\z\z\c\e\4\g\n\b\r\d\m\b\o\m\o\i\2\g\b\f\w\q\j\r\l\a\g\2\b\8\h\9\r\5\u\v\p\v\s\h\l\o\5\1\6\x\g\l\0\s\y\8\l\5\l\0\4\p\g\4\l\i\3\i\y\7\u\1\n\b\2\q\5\b\i\0\w\r\k\g\r\w\h\0\p\3\9\6\q\4\7\u\z\s\s\j\w\l\k\q\p\o\j\g\g\1\i\2\o\0\u\8\f\e\y\v\e\9\x\i\b\8\n\x\6\j\b\u\2\n\7\l\a\9\y\u\l\r\g\5\o\r\w\5\h\t\t\q\t\i\q\0\x\k\i\a\3\8\o\1\l\v\8\x\x\n\t\p\1\h\p\r\j\y\d\l\c\x\l\y\o\x\r\5\3\s\a\o\d\9\l\5\o\l\o\s\l\1\6\b\3\1\0\u\a\1\3\1\a\g\t\f\h\g\g\7\n\3\9\7\v\e\n\m\w\g\0\7\4\h\9\0\t\e\r\o\c\4\a\z\x\i\7\g\7\5\k\3\7\r\u\w\8\6\8\8\w\7\7\z\a\i\l\h\v\c\f\x\3\4\b\h\9\t\k\m\h\n\g\g\t\m\t\m\y\d\k\r\0\o\z\5\k\8\k\r\8\d\f\6\h\h\d\5\d\y\o\u\o\o\t\3\x\6\1\e\i\6\g\l\r\o\t\s\7\9\y\z\b\p\w\g\f\z\6\n\4\1\y\5\5\3\u\i\2\5\8\m\c\j\f\9\f\n\5\w\x\t\a\9\j\t\p\3\a\y\q\o\t\2\k\r\0\7\v\4\w\1\3\w\z\2\a\e\1\n\q\6\7\s\k\b\f\4\s\3\m\i\4\m\x\p\k\9\d\4\d\j\5\x\s\7\w\v\v\g\m\v\1\g\v\6\n\8\h\o\z\r\1\e\2\l\g\3\l\l\i\a\k\7\w\6\k\6\1\y\u\b\e\n\7\u\b\z\d\3\s\d\u\9\b\r\w\h\7\q\l\r ]] 00:08:30.973 10:02:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:31.232 10:02:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:31.232 10:02:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:31.232 10:02:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:31.232 10:02:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:31.232 [2024-11-19 10:02:45.084833] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:31.232 [2024-11-19 10:02:45.085168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61185 ] 00:08:31.232 { 00:08:31.232 "subsystems": [ 00:08:31.232 { 00:08:31.232 "subsystem": "bdev", 00:08:31.232 "config": [ 00:08:31.232 { 00:08:31.232 "params": { 00:08:31.232 "block_size": 512, 00:08:31.232 "num_blocks": 1048576, 00:08:31.232 "name": "malloc0" 00:08:31.232 }, 00:08:31.232 "method": "bdev_malloc_create" 00:08:31.232 }, 00:08:31.232 { 00:08:31.232 "params": { 00:08:31.232 "filename": "/dev/zram1", 00:08:31.232 "name": "uring0" 00:08:31.232 }, 00:08:31.232 "method": "bdev_uring_create" 00:08:31.232 }, 00:08:31.232 { 00:08:31.232 "method": "bdev_wait_for_examine" 00:08:31.232 } 00:08:31.232 ] 00:08:31.232 } 00:08:31.232 ] 00:08:31.232 } 00:08:31.490 [2024-11-19 10:02:45.232459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.490 [2024-11-19 10:02:45.294143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.490 [2024-11-19 10:02:45.354355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.869  [2024-11-19T10:02:47.694Z] Copying: 150/512 [MB] (150 MBps) [2024-11-19T10:02:48.631Z] Copying: 302/512 [MB] (152 MBps) [2024-11-19T10:02:49.198Z] Copying: 455/512 [MB] (153 MBps) [2024-11-19T10:02:49.456Z] Copying: 512/512 [MB] (average 152 MBps) 00:08:35.567 00:08:35.567 10:02:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:35.567 10:02:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:35.567 10:02:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:35.567 10:02:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:35.567 10:02:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:35.567 10:02:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:35.567 10:02:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:35.567 10:02:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:35.567 [2024-11-19 10:02:49.350115] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:35.567 [2024-11-19 10:02:49.350203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61241 ] 00:08:35.567 { 00:08:35.567 "subsystems": [ 00:08:35.567 { 00:08:35.567 "subsystem": "bdev", 00:08:35.567 "config": [ 00:08:35.567 { 00:08:35.567 "params": { 00:08:35.567 "block_size": 512, 00:08:35.567 "num_blocks": 1048576, 00:08:35.567 "name": "malloc0" 00:08:35.567 }, 00:08:35.567 "method": "bdev_malloc_create" 00:08:35.567 }, 00:08:35.567 { 00:08:35.567 "params": { 00:08:35.567 "filename": "/dev/zram1", 00:08:35.567 "name": "uring0" 00:08:35.567 }, 00:08:35.567 "method": "bdev_uring_create" 00:08:35.567 }, 00:08:35.567 { 00:08:35.567 "params": { 00:08:35.567 "name": "uring0" 00:08:35.567 }, 00:08:35.567 "method": "bdev_uring_delete" 00:08:35.567 }, 00:08:35.567 { 00:08:35.567 "method": "bdev_wait_for_examine" 00:08:35.567 } 00:08:35.567 ] 00:08:35.567 } 00:08:35.567 ] 00:08:35.567 } 00:08:35.824 [2024-11-19 10:02:49.491243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.824 [2024-11-19 10:02:49.553987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.824 [2024-11-19 10:02:49.608125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.083  [2024-11-19T10:02:50.230Z] Copying: 0/0 [B] (average 0 Bps) 00:08:36.341 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.341 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.601 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.601 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.601 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.601 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.601 10:02:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.601 10:02:50 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:36.601 [2024-11-19 10:02:50.288082] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:36.601 [2024-11-19 10:02:50.288210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61272 ] 00:08:36.601 { 00:08:36.601 "subsystems": [ 00:08:36.601 { 00:08:36.601 "subsystem": "bdev", 00:08:36.601 "config": [ 00:08:36.601 { 00:08:36.601 "params": { 00:08:36.601 "block_size": 512, 00:08:36.601 "num_blocks": 1048576, 00:08:36.601 "name": "malloc0" 00:08:36.601 }, 00:08:36.601 "method": "bdev_malloc_create" 00:08:36.601 }, 00:08:36.601 { 00:08:36.601 "params": { 00:08:36.601 "filename": "/dev/zram1", 00:08:36.601 "name": "uring0" 00:08:36.601 }, 00:08:36.601 "method": "bdev_uring_create" 00:08:36.601 }, 00:08:36.601 { 00:08:36.601 "params": { 00:08:36.601 "name": "uring0" 00:08:36.601 }, 00:08:36.601 "method": "bdev_uring_delete" 00:08:36.601 }, 00:08:36.601 { 00:08:36.601 "method": "bdev_wait_for_examine" 00:08:36.601 } 00:08:36.601 ] 00:08:36.601 } 00:08:36.601 ] 00:08:36.601 } 00:08:36.601 [2024-11-19 10:02:50.433663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.601 [2024-11-19 10:02:50.489892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.860 [2024-11-19 10:02:50.545777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.120 [2024-11-19 10:02:50.786446] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:37.120 [2024-11-19 10:02:50.786504] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:37.120 [2024-11-19 10:02:50.786532] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:37.120 [2024-11-19 10:02:50.786543] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.379 [2024-11-19 10:02:51.106784] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:37.379 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:37.637 ************************************ 00:08:37.637 END TEST dd_uring_copy 00:08:37.637 ************************************ 00:08:37.637 00:08:37.637 real 0m14.928s 00:08:37.637 user 0m10.038s 00:08:37.637 sys 0m12.450s 00:08:37.637 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.637 10:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:37.637 ************************************ 00:08:37.637 END TEST spdk_dd_uring 00:08:37.637 ************************************ 00:08:37.637 00:08:37.637 real 0m15.172s 00:08:37.637 user 0m10.180s 00:08:37.637 sys 0m12.553s 00:08:37.637 10:02:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.637 10:02:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:37.637 10:02:51 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:37.637 10:02:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.637 10:02:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.637 10:02:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:37.896 ************************************ 00:08:37.896 START TEST spdk_dd_sparse 00:08:37.896 ************************************ 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:37.896 * Looking for test storage... 00:08:37.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.896 --rc genhtml_branch_coverage=1 00:08:37.896 --rc genhtml_function_coverage=1 00:08:37.896 --rc genhtml_legend=1 00:08:37.896 --rc geninfo_all_blocks=1 00:08:37.896 --rc geninfo_unexecuted_blocks=1 00:08:37.896 00:08:37.896 ' 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.896 --rc genhtml_branch_coverage=1 00:08:37.896 --rc genhtml_function_coverage=1 00:08:37.896 --rc genhtml_legend=1 00:08:37.896 --rc geninfo_all_blocks=1 00:08:37.896 --rc geninfo_unexecuted_blocks=1 00:08:37.896 00:08:37.896 ' 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.896 --rc genhtml_branch_coverage=1 00:08:37.896 --rc genhtml_function_coverage=1 00:08:37.896 --rc genhtml_legend=1 00:08:37.896 --rc geninfo_all_blocks=1 00:08:37.896 --rc geninfo_unexecuted_blocks=1 00:08:37.896 00:08:37.896 ' 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.896 --rc genhtml_branch_coverage=1 00:08:37.896 --rc genhtml_function_coverage=1 00:08:37.896 --rc genhtml_legend=1 00:08:37.896 --rc geninfo_all_blocks=1 00:08:37.896 --rc geninfo_unexecuted_blocks=1 00:08:37.896 00:08:37.896 ' 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.896 10:02:51 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:37.896 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:37.897 1+0 records in 00:08:37.897 1+0 records out 00:08:37.897 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00641229 s, 654 MB/s 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:37.897 1+0 records in 00:08:37.897 1+0 records out 00:08:37.897 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00722369 s, 581 MB/s 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:37.897 1+0 records in 00:08:37.897 1+0 records out 00:08:37.897 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00428113 s, 980 MB/s 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:37.897 ************************************ 00:08:37.897 START TEST dd_sparse_file_to_file 00:08:37.897 ************************************ 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:37.897 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:38.156 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:38.156 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:38.156 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:38.156 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:38.156 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:38.156 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:38.156 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:38.156 10:02:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:38.156 { 00:08:38.156 "subsystems": [ 00:08:38.156 { 00:08:38.156 "subsystem": "bdev", 00:08:38.156 "config": [ 00:08:38.156 { 00:08:38.156 "params": { 00:08:38.156 "block_size": 4096, 00:08:38.156 "filename": "dd_sparse_aio_disk", 00:08:38.156 "name": "dd_aio" 00:08:38.156 }, 00:08:38.156 "method": "bdev_aio_create" 00:08:38.156 }, 00:08:38.156 { 00:08:38.156 "params": { 00:08:38.156 "lvs_name": "dd_lvstore", 00:08:38.156 "bdev_name": "dd_aio" 00:08:38.156 }, 00:08:38.156 "method": "bdev_lvol_create_lvstore" 00:08:38.156 }, 00:08:38.156 { 00:08:38.156 "method": "bdev_wait_for_examine" 00:08:38.156 } 00:08:38.156 ] 00:08:38.156 } 00:08:38.156 ] 00:08:38.156 } 00:08:38.156 [2024-11-19 10:02:51.845187] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:38.156 [2024-11-19 10:02:51.845440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61372 ] 00:08:38.156 [2024-11-19 10:02:52.003787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.415 [2024-11-19 10:02:52.076553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.415 [2024-11-19 10:02:52.140298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.415  [2024-11-19T10:02:52.563Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:38.674 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:38.674 00:08:38.674 real 0m0.723s 00:08:38.674 user 0m0.447s 00:08:38.674 sys 0m0.379s 00:08:38.674 ************************************ 00:08:38.674 END TEST dd_sparse_file_to_file 00:08:38.674 ************************************ 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:38.674 ************************************ 00:08:38.674 START TEST dd_sparse_file_to_bdev 00:08:38.674 ************************************ 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:38.674 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:08:38.933 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:38.933 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:38.933 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:38.933 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:38.933 10:02:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:38.933 { 00:08:38.933 "subsystems": [ 00:08:38.933 { 00:08:38.933 "subsystem": "bdev", 00:08:38.933 "config": [ 00:08:38.933 { 00:08:38.933 "params": { 00:08:38.933 "block_size": 4096, 00:08:38.933 "filename": "dd_sparse_aio_disk", 00:08:38.933 "name": "dd_aio" 00:08:38.933 }, 00:08:38.933 "method": "bdev_aio_create" 00:08:38.933 }, 00:08:38.933 { 00:08:38.933 "params": { 00:08:38.933 "lvs_name": "dd_lvstore", 00:08:38.933 "lvol_name": "dd_lvol", 00:08:38.933 "size_in_mib": 36, 00:08:38.933 "thin_provision": true 00:08:38.933 }, 00:08:38.933 "method": "bdev_lvol_create" 00:08:38.933 }, 00:08:38.933 { 00:08:38.933 "method": "bdev_wait_for_examine" 00:08:38.933 } 00:08:38.933 ] 00:08:38.933 } 00:08:38.933 ] 00:08:38.933 } 00:08:38.933 [2024-11-19 10:02:52.627880] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:38.933 [2024-11-19 10:02:52.628740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61414 ] 00:08:38.933 [2024-11-19 10:02:52.792700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.193 [2024-11-19 10:02:52.858150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.193 [2024-11-19 10:02:52.920550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.193  [2024-11-19T10:02:53.341Z] Copying: 12/36 [MB] (average 500 MBps) 00:08:39.452 00:08:39.452 00:08:39.452 real 0m0.684s 00:08:39.452 user 0m0.441s 00:08:39.452 sys 0m0.379s 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.452 ************************************ 00:08:39.452 END TEST dd_sparse_file_to_bdev 00:08:39.452 ************************************ 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:39.452 ************************************ 00:08:39.452 START TEST dd_sparse_bdev_to_file 00:08:39.452 ************************************ 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:39.452 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:39.711 { 00:08:39.711 "subsystems": [ 00:08:39.711 { 00:08:39.711 "subsystem": "bdev", 00:08:39.711 "config": [ 00:08:39.711 { 00:08:39.711 "params": { 00:08:39.711 "block_size": 4096, 00:08:39.711 "filename": "dd_sparse_aio_disk", 00:08:39.711 "name": "dd_aio" 00:08:39.711 }, 00:08:39.711 "method": "bdev_aio_create" 00:08:39.711 }, 00:08:39.711 { 00:08:39.711 "method": "bdev_wait_for_examine" 00:08:39.711 } 00:08:39.711 ] 00:08:39.711 } 00:08:39.711 ] 00:08:39.711 } 00:08:39.711 [2024-11-19 10:02:53.350193] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:39.711 [2024-11-19 10:02:53.350274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61452 ] 00:08:39.711 [2024-11-19 10:02:53.498352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.711 [2024-11-19 10:02:53.551558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.970 [2024-11-19 10:02:53.609228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.970  [2024-11-19T10:02:54.117Z] Copying: 12/36 [MB] (average 1000 MBps) 00:08:40.228 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:40.228 00:08:40.228 real 0m0.645s 00:08:40.228 user 0m0.386s 
00:08:40.228 sys 0m0.372s 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:40.228 ************************************ 00:08:40.228 END TEST dd_sparse_bdev_to_file 00:08:40.228 ************************************ 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:40.228 10:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:40.228 ************************************ 00:08:40.228 END TEST spdk_dd_sparse 00:08:40.228 ************************************ 00:08:40.228 00:08:40.228 real 0m2.477s 00:08:40.228 user 0m1.460s 00:08:40.228 sys 0m1.356s 00:08:40.228 10:02:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.228 10:02:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:40.228 10:02:54 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:40.228 10:02:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.228 10:02:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.228 10:02:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:40.228 ************************************ 00:08:40.228 START TEST spdk_dd_negative 00:08:40.228 ************************************ 00:08:40.229 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:40.488 * Looking for test storage... 
00:08:40.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.488 --rc genhtml_branch_coverage=1 00:08:40.488 --rc genhtml_function_coverage=1 00:08:40.488 --rc genhtml_legend=1 00:08:40.488 --rc geninfo_all_blocks=1 00:08:40.488 --rc geninfo_unexecuted_blocks=1 00:08:40.488 00:08:40.488 ' 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.488 --rc genhtml_branch_coverage=1 00:08:40.488 --rc genhtml_function_coverage=1 00:08:40.488 --rc genhtml_legend=1 00:08:40.488 --rc geninfo_all_blocks=1 00:08:40.488 --rc geninfo_unexecuted_blocks=1 00:08:40.488 00:08:40.488 ' 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.488 --rc genhtml_branch_coverage=1 00:08:40.488 --rc genhtml_function_coverage=1 00:08:40.488 --rc genhtml_legend=1 00:08:40.488 --rc geninfo_all_blocks=1 00:08:40.488 --rc geninfo_unexecuted_blocks=1 00:08:40.488 00:08:40.488 ' 00:08:40.488 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.488 --rc genhtml_branch_coverage=1 00:08:40.488 --rc genhtml_function_coverage=1 00:08:40.488 --rc genhtml_legend=1 00:08:40.488 --rc geninfo_all_blocks=1 00:08:40.488 --rc geninfo_unexecuted_blocks=1 00:08:40.488 00:08:40.488 ' 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:40.489 ************************************ 00:08:40.489 START TEST 
dd_invalid_arguments 00:08:40.489 ************************************ 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.489 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:40.489 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:40.489 00:08:40.489 CPU options: 00:08:40.489 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:40.489 (like [0,1,10]) 00:08:40.489 --lcores lcore to CPU mapping list. The list is in the format: 00:08:40.489 [<,lcores[@CPUs]>...] 00:08:40.489 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:40.489 Within the group, '-' is used for range separator, 00:08:40.489 ',' is used for single number separator. 00:08:40.489 '( )' can be omitted for single element group, 00:08:40.489 '@' can be omitted if cpus and lcores have the same value 00:08:40.489 --disable-cpumask-locks Disable CPU core lock files. 00:08:40.489 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:40.489 pollers in the app support interrupt mode) 00:08:40.489 -p, --main-core main (primary) core for DPDK 00:08:40.489 00:08:40.489 Configuration options: 00:08:40.489 -c, --config, --json JSON config file 00:08:40.489 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:40.489 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:40.489 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:40.489 --rpcs-allowed comma-separated list of permitted RPCS 00:08:40.489 --json-ignore-init-errors don't exit on invalid config entry 00:08:40.489 00:08:40.489 Memory options: 00:08:40.489 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:40.489 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:40.489 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:40.489 -R, --huge-unlink unlink huge files after initialization 00:08:40.489 -n, --mem-channels number of memory channels used for DPDK 00:08:40.489 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:40.489 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:40.489 --no-huge run without using hugepages 00:08:40.489 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:40.489 -i, --shm-id shared memory ID (optional) 00:08:40.489 -g, --single-file-segments force creating just one hugetlbfs file 00:08:40.489 00:08:40.489 PCI options: 00:08:40.489 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:40.489 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:40.489 -u, --no-pci disable PCI access 00:08:40.489 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:40.489 00:08:40.489 Log options: 00:08:40.489 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:40.489 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:40.489 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:40.489 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:40.489 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:40.489 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:40.489 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:40.489 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:40.489 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:40.489 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:40.489 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:40.489 --silence-noticelog disable notice level logging to stderr 00:08:40.489 00:08:40.489 Trace options: 00:08:40.489 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:40.489 setting 0 to disable trace (default 32768) 00:08:40.489 Tracepoints vary in size and can use more than one trace entry. 00:08:40.489 -e, --tpoint-group [:] 00:08:40.489 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:40.489 [2024-11-19 10:02:54.339617] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:40.489 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:40.489 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:40.489 bdev_raid, scheduler, all). 00:08:40.489 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:40.489 a tracepoint group. First tpoint inside a group can be enabled by 00:08:40.489 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:40.489 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:40.489 in /include/spdk_internal/trace_defs.h 00:08:40.489 00:08:40.489 Other options: 00:08:40.489 -h, --help show this usage 00:08:40.489 -v, --version print SPDK version 00:08:40.489 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:40.489 --env-context Opaque context for use of the env implementation 00:08:40.489 00:08:40.489 Application specific: 00:08:40.489 [--------- DD Options ---------] 00:08:40.489 --if Input file. Must specify either --if or --ib. 00:08:40.489 --ib Input bdev. Must specifier either --if or --ib 00:08:40.490 --of Output file. Must specify either --of or --ob. 00:08:40.490 --ob Output bdev. Must specify either --of or --ob. 00:08:40.490 --iflag Input file flags. 00:08:40.490 --oflag Output file flags. 00:08:40.490 --bs I/O unit size (default: 4096) 00:08:40.490 --qd Queue depth (default: 2) 00:08:40.490 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:40.490 --skip Skip this many I/O units at start of input. (default: 0) 00:08:40.490 --seek Skip this many I/O units at start of output. (default: 0) 00:08:40.490 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:40.490 --sparse Enable hole skipping in input target 00:08:40.490 Available iflag and oflag values: 00:08:40.490 append - append mode 00:08:40.490 direct - use direct I/O for data 00:08:40.490 directory - fail unless a directory 00:08:40.490 dsync - use synchronized I/O for data 00:08:40.490 noatime - do not update access time 00:08:40.490 noctty - do not assign controlling terminal from file 00:08:40.490 nofollow - do not follow symlinks 00:08:40.490 nonblock - use non-blocking I/O 00:08:40.490 sync - use synchronized I/O for data and metadata 00:08:40.490 ************************************ 00:08:40.490 END TEST dd_invalid_arguments 00:08:40.490 ************************************ 00:08:40.490 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:08:40.490 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.490 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.490 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.490 00:08:40.490 real 0m0.086s 00:08:40.490 user 0m0.050s 00:08:40.490 sys 0m0.034s 00:08:40.490 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.490 10:02:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:40.749 ************************************ 00:08:40.749 START TEST dd_double_input 00:08:40.749 ************************************ 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:40.749 [2024-11-19 10:02:54.463827] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.749 00:08:40.749 real 0m0.063s 00:08:40.749 user 0m0.040s 00:08:40.749 sys 0m0.022s 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:40.749 ************************************ 00:08:40.749 END TEST dd_double_input 00:08:40.749 ************************************ 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:40.749 ************************************ 00:08:40.749 START TEST dd_double_output 00:08:40.749 ************************************ 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:40.749 [2024-11-19 10:02:54.590744] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.749 00:08:40.749 real 0m0.079s 00:08:40.749 user 0m0.041s 00:08:40.749 sys 0m0.037s 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.749 ************************************ 00:08:40.749 END TEST dd_double_output 00:08:40.749 ************************************ 00:08:40.749 10:02:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:41.008 ************************************ 00:08:41.008 START TEST dd_no_input 00:08:41.008 ************************************ 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:41.008 [2024-11-19 10:02:54.719853] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.008 00:08:41.008 real 0m0.076s 00:08:41.008 user 0m0.048s 00:08:41.008 sys 0m0.028s 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:41.008 ************************************ 00:08:41.008 END TEST dd_no_input 00:08:41.008 ************************************ 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:41.008 ************************************ 00:08:41.008 START TEST dd_no_output 00:08:41.008 ************************************ 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:41.008 [2024-11-19 10:02:54.846096] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:41.008 10:02:54 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.008 00:08:41.008 real 0m0.078s 00:08:41.008 user 0m0.048s 00:08:41.008 sys 0m0.029s 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.008 10:02:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:41.008 ************************************ 00:08:41.008 END TEST dd_no_output 00:08:41.009 ************************************ 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:41.267 ************************************ 00:08:41.267 START TEST dd_wrong_blocksize 00:08:41.267 ************************************ 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:41.267 [2024-11-19 10:02:54.973143] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.267 00:08:41.267 real 0m0.075s 00:08:41.267 user 0m0.051s 00:08:41.267 sys 0m0.022s 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.267 10:02:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:41.267 ************************************ 00:08:41.267 END TEST dd_wrong_blocksize 00:08:41.268 ************************************ 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:41.268 ************************************ 00:08:41.268 START TEST dd_smaller_blocksize 00:08:41.268 ************************************ 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.268 
10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.268 10:02:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:41.268 [2024-11-19 10:02:55.094697] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:41.268 [2024-11-19 10:02:55.095230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:08:41.526 [2024-11-19 10:02:55.253525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.526 [2024-11-19 10:02:55.319729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.526 [2024-11-19 10:02:55.380688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.093 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:42.351 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:42.351 [2024-11-19 10:02:56.011566] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:42.351 [2024-11-19 10:02:56.011630] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.351 [2024-11-19 10:02:56.145735] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:42.351 10:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:08:42.351 10:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.351 10:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:08:42.351 10:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:08:42.351 10:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:08:42.351 10:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.351 00:08:42.351 real 0m1.171s 00:08:42.351 user 0m0.420s 00:08:42.351 sys 0m0.641s 00:08:42.351 10:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.351 ************************************ 00:08:42.351 END TEST dd_smaller_blocksize 00:08:42.351 ************************************ 00:08:42.351 10:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:42.610 ************************************ 00:08:42.610 START TEST dd_invalid_count 00:08:42.610 ************************************ 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:42.610 [2024-11-19 10:02:56.323339] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.610 00:08:42.610 real 0m0.081s 00:08:42.610 user 0m0.044s 00:08:42.610 sys 0m0.036s 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:42.610 ************************************ 00:08:42.610 END TEST dd_invalid_count 00:08:42.610 ************************************ 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.610 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:42.611 ************************************ 
00:08:42.611 START TEST dd_invalid_oflag 00:08:42.611 ************************************ 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:42.611 [2024-11-19 10:02:56.447169] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.611 00:08:42.611 real 0m0.067s 00:08:42.611 user 0m0.040s 00:08:42.611 sys 0m0.026s 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.611 ************************************ 00:08:42.611 END TEST dd_invalid_oflag 00:08:42.611 ************************************ 00:08:42.611 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 ************************************ 00:08:42.870 START TEST dd_invalid_iflag 00:08:42.870 
************************************ 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:42.870 [2024-11-19 10:02:56.574145] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.870 00:08:42.870 real 0m0.080s 00:08:42.870 user 0m0.053s 00:08:42.870 sys 0m0.026s 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.870 ************************************ 00:08:42.870 END TEST dd_invalid_iflag 00:08:42.870 ************************************ 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 ************************************ 00:08:42.870 START TEST dd_unknown_flag 00:08:42.870 ************************************ 00:08:42.870 
10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.870 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.871 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.871 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.871 10:02:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:42.871 [2024-11-19 10:02:56.705072] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:42.871 [2024-11-19 10:02:56.705156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61776 ] 00:08:43.130 [2024-11-19 10:02:56.855909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.130 [2024-11-19 10:02:56.920418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.130 [2024-11-19 10:02:56.980704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.390 [2024-11-19 10:02:57.020465] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:43.390 [2024-11-19 10:02:57.020535] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.390 [2024-11-19 10:02:57.020595] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:43.390 [2024-11-19 10:02:57.020610] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.390 [2024-11-19 10:02:57.020836] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:43.390 [2024-11-19 10:02:57.020853] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.390 [2024-11-19 10:02:57.020908] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:43.390 [2024-11-19 10:02:57.020934] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:43.390 [2024-11-19 10:02:57.147776] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.390 ************************************ 00:08:43.390 END TEST dd_unknown_flag 00:08:43.390 ************************************ 00:08:43.390 00:08:43.390 real 0m0.571s 00:08:43.390 user 0m0.315s 00:08:43.390 sys 0m0.166s 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:43.390 ************************************ 00:08:43.390 START TEST dd_invalid_json 00:08:43.390 ************************************ 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.390 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:43.681 [2024-11-19 10:02:57.326591] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:43.681 [2024-11-19 10:02:57.326711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61810 ] 00:08:43.681 [2024-11-19 10:02:57.476774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.681 [2024-11-19 10:02:57.538451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.681 [2024-11-19 10:02:57.538540] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:43.681 [2024-11-19 10:02:57.538559] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:43.681 [2024-11-19 10:02:57.538569] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.681 [2024-11-19 10:02:57.538609] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.940 00:08:43.940 real 0m0.344s 00:08:43.940 user 0m0.179s 00:08:43.940 sys 0m0.063s 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.940 ************************************ 00:08:43.940 END TEST dd_invalid_json 00:08:43.940 ************************************ 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:43.940 ************************************ 00:08:43.940 START TEST dd_invalid_seek 00:08:43.940 ************************************ 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:43.940 
10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.940 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.941 10:02:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:43.941 { 00:08:43.941 "subsystems": [ 00:08:43.941 { 00:08:43.941 "subsystem": "bdev", 00:08:43.941 "config": [ 00:08:43.941 { 00:08:43.941 "params": { 00:08:43.941 "block_size": 512, 00:08:43.941 "num_blocks": 512, 00:08:43.941 "name": "malloc0" 00:08:43.941 }, 00:08:43.941 "method": "bdev_malloc_create" 00:08:43.941 }, 00:08:43.941 { 00:08:43.941 "params": { 00:08:43.941 "block_size": 512, 00:08:43.941 "num_blocks": 512, 00:08:43.941 "name": "malloc1" 00:08:43.941 }, 00:08:43.941 "method": "bdev_malloc_create" 00:08:43.941 }, 00:08:43.941 { 00:08:43.941 "method": "bdev_wait_for_examine" 00:08:43.941 } 00:08:43.941 ] 00:08:43.941 } 00:08:43.941 ] 00:08:43.941 } 00:08:43.941 [2024-11-19 10:02:57.720778] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:43.941 [2024-11-19 10:02:57.720878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:08:44.199 [2024-11-19 10:02:57.868666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.199 [2024-11-19 10:02:57.930012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.199 [2024-11-19 10:02:57.989538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.199 [2024-11-19 10:02:58.053944] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:44.199 [2024-11-19 10:02:58.054050] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.458 [2024-11-19 10:02:58.179320] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:44.458 00:08:44.458 real 0m0.593s 00:08:44.458 user 0m0.383s 00:08:44.458 sys 0m0.169s 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:44.458 ************************************ 00:08:44.458 END TEST dd_invalid_seek 00:08:44.458 ************************************ 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:44.458 ************************************ 00:08:44.458 START TEST dd_invalid_skip 00:08:44.458 ************************************ 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:44.458 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.459 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.459 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.459 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.459 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.459 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.459 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:44.459 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:44.718 { 00:08:44.718 "subsystems": [ 00:08:44.718 { 00:08:44.718 "subsystem": "bdev", 00:08:44.718 "config": [ 00:08:44.718 { 00:08:44.718 "params": { 00:08:44.718 "block_size": 512, 00:08:44.718 "num_blocks": 512, 00:08:44.718 "name": "malloc0" 00:08:44.718 }, 00:08:44.718 "method": "bdev_malloc_create" 00:08:44.718 }, 00:08:44.718 { 00:08:44.718 "params": { 00:08:44.718 "block_size": 512, 00:08:44.718 "num_blocks": 512, 00:08:44.718 "name": "malloc1" 00:08:44.718 }, 00:08:44.718 "method": "bdev_malloc_create" 00:08:44.718 }, 00:08:44.718 { 00:08:44.718 "method": "bdev_wait_for_examine" 00:08:44.718 } 00:08:44.718 ] 00:08:44.718 } 00:08:44.718 ] 00:08:44.718 } 00:08:44.718 [2024-11-19 10:02:58.370728] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:44.718 [2024-11-19 10:02:58.370878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61871 ] 00:08:44.718 [2024-11-19 10:02:58.518585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.718 [2024-11-19 10:02:58.582188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.977 [2024-11-19 10:02:58.642760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.977 [2024-11-19 10:02:58.707903] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:44.977 [2024-11-19 10:02:58.708006] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.977 [2024-11-19 10:02:58.837689] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:45.237 00:08:45.237 real 0m0.604s 00:08:45.237 user 0m0.395s 00:08:45.237 sys 0m0.168s 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:45.237 ************************************ 00:08:45.237 END TEST dd_invalid_skip 00:08:45.237 ************************************ 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:45.237 ************************************ 00:08:45.237 START TEST dd_invalid_input_count 00:08:45.237 ************************************ 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:45.237 10:02:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:45.237 [2024-11-19 10:02:59.014808] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:45.237 [2024-11-19 10:02:59.014949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61908 ] 00:08:45.237 { 00:08:45.237 "subsystems": [ 00:08:45.237 { 00:08:45.237 "subsystem": "bdev", 00:08:45.237 "config": [ 00:08:45.237 { 00:08:45.237 "params": { 00:08:45.237 "block_size": 512, 00:08:45.237 "num_blocks": 512, 00:08:45.237 "name": "malloc0" 00:08:45.237 }, 00:08:45.237 "method": "bdev_malloc_create" 00:08:45.237 }, 00:08:45.237 { 00:08:45.237 "params": { 00:08:45.237 "block_size": 512, 00:08:45.238 "num_blocks": 512, 00:08:45.238 "name": "malloc1" 00:08:45.238 }, 00:08:45.238 "method": "bdev_malloc_create" 00:08:45.238 }, 00:08:45.238 { 00:08:45.238 "method": "bdev_wait_for_examine" 00:08:45.238 } 00:08:45.238 ] 00:08:45.238 } 00:08:45.238 ] 00:08:45.238 } 00:08:45.495 [2024-11-19 10:02:59.160289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.495 [2024-11-19 10:02:59.224172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.495 [2024-11-19 10:02:59.281933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.495 [2024-11-19 10:02:59.347014] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:45.495 [2024-11-19 10:02:59.347078] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:45.753 [2024-11-19 10:02:59.472702] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:45.753 00:08:45.753 real 0m0.576s 00:08:45.753 user 0m0.367s 00:08:45.753 sys 0m0.167s 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:45.753 ************************************ 00:08:45.753 END TEST dd_invalid_input_count 00:08:45.753 ************************************ 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:45.753 ************************************ 00:08:45.753 START TEST dd_invalid_output_count 00:08:45.753 ************************************ 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:45.753 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.754 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.754 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.754 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.754 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.754 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.754 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:45.754 10:02:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:46.013 { 00:08:46.013 "subsystems": [ 00:08:46.013 { 00:08:46.013 "subsystem": "bdev", 00:08:46.013 "config": [ 00:08:46.013 { 00:08:46.013 "params": { 00:08:46.013 "block_size": 512, 00:08:46.013 "num_blocks": 512, 00:08:46.013 "name": "malloc0" 00:08:46.013 }, 00:08:46.013 "method": "bdev_malloc_create" 00:08:46.013 }, 00:08:46.013 { 00:08:46.013 "method": "bdev_wait_for_examine" 00:08:46.013 } 00:08:46.013 ] 00:08:46.013 } 00:08:46.013 ] 00:08:46.013 } 00:08:46.013 [2024-11-19 10:02:59.666870] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 
initialization... 00:08:46.013 [2024-11-19 10:02:59.667021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61940 ] 00:08:46.013 [2024-11-19 10:02:59.813829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.013 [2024-11-19 10:02:59.870048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.271 [2024-11-19 10:02:59.923623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.271 [2024-11-19 10:02:59.975099] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:46.271 [2024-11-19 10:02:59.975197] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.271 [2024-11-19 10:03:00.088065] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:46.271 10:03:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:46.271 10:03:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:46.271 10:03:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:46.271 10:03:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:46.271 10:03:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:46.271 10:03:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:46.271 00:08:46.271 real 0m0.568s 00:08:46.271 user 0m0.370s 00:08:46.271 sys 0m0.159s 00:08:46.271 10:03:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.271 10:03:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:46.271 ************************************ 00:08:46.271 END TEST dd_invalid_output_count 00:08:46.271 ************************************ 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:46.532 ************************************ 00:08:46.532 START TEST dd_bs_not_multiple 00:08:46.532 ************************************ 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:46.532 10:03:00 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:46.532 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:46.532 [2024-11-19 10:03:00.272268] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:46.532 [2024-11-19 10:03:00.272863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61972 ] 00:08:46.532 { 00:08:46.532 "subsystems": [ 00:08:46.532 { 00:08:46.532 "subsystem": "bdev", 00:08:46.532 "config": [ 00:08:46.532 { 00:08:46.532 "params": { 00:08:46.532 "block_size": 512, 00:08:46.532 "num_blocks": 512, 00:08:46.532 "name": "malloc0" 00:08:46.532 }, 00:08:46.532 "method": "bdev_malloc_create" 00:08:46.532 }, 00:08:46.532 { 00:08:46.532 "params": { 00:08:46.532 "block_size": 512, 00:08:46.532 "num_blocks": 512, 00:08:46.532 "name": "malloc1" 00:08:46.532 }, 00:08:46.532 "method": "bdev_malloc_create" 00:08:46.532 }, 00:08:46.532 { 00:08:46.532 "method": "bdev_wait_for_examine" 00:08:46.532 } 00:08:46.532 ] 00:08:46.532 } 00:08:46.532 ] 00:08:46.532 } 00:08:46.532 [2024-11-19 10:03:00.414544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.792 [2024-11-19 10:03:00.466607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.792 [2024-11-19 10:03:00.520540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.792 [2024-11-19 10:03:00.585696] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:46.792 [2024-11-19 10:03:00.585840] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:47.051 [2024-11-19 10:03:00.711883] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:47.051 00:08:47.051 real 0m0.572s 00:08:47.051 user 0m0.365s 00:08:47.051 sys 0m0.161s 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:47.051 ************************************ 00:08:47.051 END TEST dd_bs_not_multiple 00:08:47.051 ************************************ 00:08:47.051 00:08:47.051 real 0m6.776s 00:08:47.051 user 0m3.612s 00:08:47.051 sys 0m2.567s 00:08:47.051 ************************************ 00:08:47.051 END TEST spdk_dd_negative 00:08:47.051 ************************************ 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.051 10:03:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:47.051 00:08:47.052 real 1m18.441s 00:08:47.052 user 0m49.826s 00:08:47.052 sys 0m34.992s 00:08:47.052 10:03:00 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.052 ************************************ 00:08:47.052 END TEST spdk_dd 00:08:47.052 
************************************ 00:08:47.052 10:03:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:47.052 10:03:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:47.052 10:03:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:47.052 10:03:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:47.052 10:03:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.052 10:03:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.312 10:03:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:47.312 10:03:00 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:47.312 10:03:00 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:47.312 10:03:00 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:47.312 10:03:00 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:47.312 10:03:00 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:47.312 10:03:00 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:47.312 10:03:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.312 10:03:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.312 10:03:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.312 ************************************ 00:08:47.312 START TEST nvmf_tcp 00:08:47.312 ************************************ 00:08:47.312 10:03:00 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:47.312 * Looking for test storage... 00:08:47.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.312 10:03:01 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.312 --rc genhtml_branch_coverage=1 00:08:47.312 --rc genhtml_function_coverage=1 00:08:47.312 --rc genhtml_legend=1 00:08:47.312 --rc geninfo_all_blocks=1 00:08:47.312 --rc geninfo_unexecuted_blocks=1 00:08:47.312 00:08:47.312 ' 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.312 --rc genhtml_branch_coverage=1 00:08:47.312 --rc genhtml_function_coverage=1 00:08:47.312 --rc genhtml_legend=1 00:08:47.312 --rc geninfo_all_blocks=1 00:08:47.312 --rc geninfo_unexecuted_blocks=1 00:08:47.312 00:08:47.312 ' 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.312 --rc genhtml_branch_coverage=1 00:08:47.312 --rc genhtml_function_coverage=1 00:08:47.312 --rc genhtml_legend=1 00:08:47.312 --rc geninfo_all_blocks=1 00:08:47.312 --rc geninfo_unexecuted_blocks=1 00:08:47.312 00:08:47.312 ' 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.312 --rc genhtml_branch_coverage=1 00:08:47.312 --rc genhtml_function_coverage=1 00:08:47.312 --rc genhtml_legend=1 00:08:47.312 --rc geninfo_all_blocks=1 00:08:47.312 --rc geninfo_unexecuted_blocks=1 00:08:47.312 00:08:47.312 ' 00:08:47.312 10:03:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:47.312 10:03:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:47.312 10:03:01 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.312 10:03:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.312 ************************************ 00:08:47.312 START TEST nvmf_target_core 00:08:47.312 ************************************ 00:08:47.312 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:47.572 * Looking for test storage... 00:08:47.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.572 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.573 --rc genhtml_branch_coverage=1 00:08:47.573 --rc genhtml_function_coverage=1 00:08:47.573 --rc genhtml_legend=1 00:08:47.573 --rc geninfo_all_blocks=1 00:08:47.573 --rc geninfo_unexecuted_blocks=1 00:08:47.573 00:08:47.573 ' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.573 --rc genhtml_branch_coverage=1 00:08:47.573 --rc genhtml_function_coverage=1 00:08:47.573 --rc genhtml_legend=1 00:08:47.573 --rc geninfo_all_blocks=1 00:08:47.573 --rc geninfo_unexecuted_blocks=1 00:08:47.573 00:08:47.573 ' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.573 --rc genhtml_branch_coverage=1 00:08:47.573 --rc genhtml_function_coverage=1 00:08:47.573 --rc genhtml_legend=1 00:08:47.573 --rc geninfo_all_blocks=1 00:08:47.573 --rc geninfo_unexecuted_blocks=1 00:08:47.573 00:08:47.573 ' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.573 --rc genhtml_branch_coverage=1 00:08:47.573 --rc genhtml_function_coverage=1 00:08:47.573 --rc genhtml_legend=1 00:08:47.573 --rc geninfo_all_blocks=1 00:08:47.573 --rc geninfo_unexecuted_blocks=1 00:08:47.573 00:08:47.573 ' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.573 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.573 ************************************ 00:08:47.573 START TEST nvmf_host_management 00:08:47.573 ************************************ 00:08:47.573 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:47.834 * Looking for test storage... 
00:08:47.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.834 --rc genhtml_branch_coverage=1 00:08:47.834 --rc genhtml_function_coverage=1 00:08:47.834 --rc genhtml_legend=1 00:08:47.834 --rc geninfo_all_blocks=1 00:08:47.834 --rc geninfo_unexecuted_blocks=1 00:08:47.834 00:08:47.834 ' 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.834 --rc genhtml_branch_coverage=1 00:08:47.834 --rc genhtml_function_coverage=1 00:08:47.834 --rc genhtml_legend=1 00:08:47.834 --rc geninfo_all_blocks=1 00:08:47.834 --rc geninfo_unexecuted_blocks=1 00:08:47.834 00:08:47.834 ' 00:08:47.834 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.834 --rc genhtml_branch_coverage=1 00:08:47.834 --rc genhtml_function_coverage=1 00:08:47.835 --rc genhtml_legend=1 00:08:47.835 --rc geninfo_all_blocks=1 00:08:47.835 --rc geninfo_unexecuted_blocks=1 00:08:47.835 00:08:47.835 ' 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.835 --rc genhtml_branch_coverage=1 00:08:47.835 --rc genhtml_function_coverage=1 00:08:47.835 --rc genhtml_legend=1 00:08:47.835 --rc geninfo_all_blocks=1 00:08:47.835 --rc geninfo_unexecuted_blocks=1 00:08:47.835 00:08:47.835 ' 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
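
The lt 1.15 2 check traced just above (it reappears once per nested test scope) decides whether the installed lcov is new enough to take the branch/function coverage flags. Stripped of the xtrace noise, cmp_versions is an element-wise numeric compare of the dotted version fields, with missing fields treated as 0. The sketch below is a simplified rendition covering only the "<" and ">" cases; the real scripts/common.sh also splits on '-' and ':' and handles the remaining operators:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local op=$2 v a b
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if ((a > b)); then [[ $op == '>' ]]; return; fi
        if ((a < b)); then [[ $op == '<' ]]; return; fi
    done
    return 1   # versions are equal: neither strictly "<" nor ">"
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"   # true: 1 < 2 in the first field
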
00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.835 10:03:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:47.835 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:47.836 Cannot find device "nvmf_init_br" 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:47.836 Cannot find device "nvmf_init_br2" 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:47.836 Cannot find device "nvmf_tgt_br" 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.836 Cannot find device "nvmf_tgt_br2" 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:47.836 Cannot find device "nvmf_init_br" 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:47.836 Cannot find device "nvmf_init_br2" 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:47.836 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:48.095 Cannot find device "nvmf_tgt_br" 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:48.095 Cannot find device "nvmf_tgt_br2" 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:48.095 Cannot find device "nvmf_br" 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:48.095 Cannot find device "nvmf_init_if" 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:48.095 Cannot find device "nvmf_init_if2" 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:48.095 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.096 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:48.356 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:48.356 10:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:48.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:08:48.356 00:08:48.356 --- 10.0.0.3 ping statistics --- 00:08:48.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.356 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:48.356 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:48.356 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:08:48.356 00:08:48.356 --- 10.0.0.4 ping statistics --- 00:08:48.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.356 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:48.356 00:08:48.356 --- 10.0.0.1 ping statistics --- 00:08:48.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.356 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:48.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:48.356 00:08:48.356 --- 10.0.0.2 ping statistics --- 00:08:48.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.356 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62326 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62326 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62326 ']' 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.356 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.356 [2024-11-19 10:03:02.230406] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
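
The nvmf_veth_init sequence traced above builds the virtual topology the nvmf target tests run on: a network namespace nvmf_tgt_ns_spdk for the target, two veth pairs toward it and two toward the initiator side, all joined by the nvmf_br bridge, with 10.0.0.1/10.0.0.2 on the initiator interfaces and 10.0.0.3/10.0.0.4 inside the namespace, plus iptables ACCEPT rules for port 4420 and for bridged traffic. Condensed to the commands that matter (the defensive teardown of leftover devices, i.e. the "Cannot find device" lines, and the SPDK_NVMF iptables comments are omitted):

ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator ends stay in the root namespace, target ends move into the netns.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators on .1/.2, target interfaces on .3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the four peer ends together.
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip link set nvmf_init_br up; ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br

# Let NVMe/TCP traffic (port 4420) in and bridged traffic through.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: both directions must answer, exactly as the ping output above shows.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
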
00:08:48.356 [2024-11-19 10:03:02.230533] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.685 [2024-11-19 10:03:02.388264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.685 [2024-11-19 10:03:02.458686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.685 [2024-11-19 10:03:02.458769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.685 [2024-11-19 10:03:02.458791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.685 [2024-11-19 10:03:02.458802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.685 [2024-11-19 10:03:02.458811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.686 [2024-11-19 10:03:02.460059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.686 [2024-11-19 10:03:02.460199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.686 [2024-11-19 10:03:02.460340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:48.686 [2024-11-19 10:03:02.460346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.686 [2024-11-19 10:03:02.518286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.955 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.955 [2024-11-19 10:03:02.629751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
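
At this point the target app is up (nvmf_tgt inside the nvmf_tgt_ns_spdk namespace, pid 62326), the TCP transport has been created with nvmf_create_transport -t tcp -o -u 8192, and the test starts batching RPCs into rpcs.txt before feeding them to rpc_cmd. The batch itself is not echoed in this excerpt; judging by what shows up next (a Malloc0 bdev and a listener on 10.0.0.3:4420 for nqn.2016-06.io.spdk:cnode0, later torn down with nvmf_subsystem_remove_host), it is equivalent to a standard rpc.py sequence along these lines. The command names are stock SPDK RPCs, but the exact flags and ordering here are an assumption, not a transcript:

# MALLOC_BDEV_SIZE=64 (MiB) and MALLOC_BLOCK_SIZE=512 come from host_management.sh@11-12 above.
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
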
00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.956 Malloc0 00:08:48.956 [2024-11-19 10:03:02.717784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62372 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62372 /var/tmp/bdevperf.sock 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62372 ']' 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:48.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
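
The host side of the test is SPDK's bdevperf example, launched above as perfpid 62372 against the freshly created subsystem. In the invocation traced here, -r names the RPC socket that the later bdev_get_iostat polling talks to, --json /dev/fd/63 feeds it the bdev config produced by gen_nvmf_target_json (traced next), -q 64 is the queue depth, -o 65536 the I/O size in bytes, -w verify selects bdevperf's verifying read/write workload, and -t 10 the run time in seconds. A standalone equivalent with the config in an ordinary file (the file name is illustrative) would look like:

# Same flags as the trace; only the JSON source differs (regular file instead of /dev/fd/63).
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme_target.json \
    -q 64 -o 65536 -w verify -t 10
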
00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:48.956 { 00:08:48.956 "params": { 00:08:48.956 "name": "Nvme$subsystem", 00:08:48.956 "trtype": "$TEST_TRANSPORT", 00:08:48.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.956 "adrfam": "ipv4", 00:08:48.956 "trsvcid": "$NVMF_PORT", 00:08:48.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.956 "hdgst": ${hdgst:-false}, 00:08:48.956 "ddgst": ${ddgst:-false} 00:08:48.956 }, 00:08:48.956 "method": "bdev_nvme_attach_controller" 00:08:48.956 } 00:08:48.956 EOF 00:08:48.956 )") 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:48.956 10:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:48.956 "params": { 00:08:48.956 "name": "Nvme0", 00:08:48.956 "trtype": "tcp", 00:08:48.956 "traddr": "10.0.0.3", 00:08:48.956 "adrfam": "ipv4", 00:08:48.956 "trsvcid": "4420", 00:08:48.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:48.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:48.956 "hdgst": false, 00:08:48.956 "ddgst": false 00:08:48.956 }, 00:08:48.956 "method": "bdev_nvme_attach_controller" 00:08:48.956 }' 00:08:48.956 [2024-11-19 10:03:02.833829] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:48.956 [2024-11-19 10:03:02.834899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62372 ] 00:08:49.215 [2024-11-19 10:03:02.992094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.216 [2024-11-19 10:03:03.053928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.475 [2024-11-19 10:03:03.120836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.475 Running I/O for 10 seconds... 
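
Assembled, the config that gen_nvmf_target_json writes to fd 63 is the bdev_nvme_attach_controller entry printed above wrapped in the usual subsystems/bdev envelope (the same layout as the spdk_dd config earlier in the log; whether the harness also appends a bdev_wait_for_examine step is not visible in this excerpt). For a standalone bdevperf run as sketched above, it could be dropped into a file like this:

cat > /tmp/nvme_target.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
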
00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:50.042 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.043 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.303 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:50.303 [2024-11-19 
10:03:03.971790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.303 [2024-11-19 10:03:03.971840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.303 [2024-11-19 10:03:03.971854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.303 [2024-11-19 10:03:03.971865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.303 [2024-11-19 10:03:03.971874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.303 [2024-11-19 10:03:03.971885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.303 [2024-11-19 10:03:03.971895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.303 [2024-11-19 10:03:03.971905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.303 [2024-11-19 10:03:03.971931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.971943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.971953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.971964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.971974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.971983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.971993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to 
be set 00:08:50.304 [2024-11-19 10:03:03.972071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6e50 is same with the state(6) to be set 00:08:50.304 [2024-11-19 10:03:03.972626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972690] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972901] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.304 [2024-11-19 10:03:03.972965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.304 [2024-11-19 10:03:03.972975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.972986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:50.305 [2024-11-19 10:03:03.973611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 10:03:03.973813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.305 [2024-11-19 
10:03:03.973834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.305 [2024-11-19 10:03:03.973844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.973855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.973865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.973876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.973886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.973897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.973906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.973930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.973942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.973954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.973963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.973975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.973984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.974002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.974012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.974024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.974033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.974044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.974055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.974067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.974081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.974093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.306 [2024-11-19 10:03:03.974102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.306 [2024-11-19 10:03:03.974114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21462d0 is same with the state(6) to be set 00:08:50.306 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.306 [2024-11-19 10:03:03.975391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:50.306 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:50.306 task offset: 0 on job bdev=Nvme0n1 fails 00:08:50.306 00:08:50.306 Latency(us) 00:08:50.306 [2024-11-19T10:03:04.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.306 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:50.306 Job: Nvme0n1 ended in about 0.73 seconds with error 00:08:50.306 Verification LBA range: start 0x0 length 0x400 00:08:50.306 Nvme0n1 : 0.73 1400.16 87.51 87.51 0.00 41839.03 3187.43 44802.79 00:08:50.306 [2024-11-19T10:03:04.195Z] =================================================================================================================== 00:08:50.306 [2024-11-19T10:03:04.195Z] Total : 1400.16 87.51 87.51 0.00 41839.03 3187.43 44802.79 00:08:50.306 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.306 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:50.306 [2024-11-19 10:03:03.977479] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.306 [2024-11-19 10:03:03.977506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214bce0 (9): Bad file descriptor 00:08:50.306 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.306 10:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:50.306 [2024-11-19 10:03:03.990173] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
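Aside on the read-I/O gate traced at host_management.sh@52-64 above: it simply polls bdevperf's RPC socket until the Nvme0n1 bdev reports at least 100 completed reads. A simplified stand-alone sketch of that loop follows; the helper name, retry count and sleep interval are illustrative, not the script verbatim:

# simplified sketch of the waitforio check seen in the trace above
wait_for_read_io() {
    local sock=$1 bdev=$2 i ops
    for ((i = 0; i < 10; i++)); do
        ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        # the test treats >= 100 completed reads as proof that I/O is flowing
        if [[ "$ops" -ge 100 ]]; then
            return 0
        fi
        sleep 1
    done
    return 1
}
wait_for_read_io /var/tmp/bdevperf.sock Nvme0n1

As a sanity check on the summary above, the reported throughput follows directly from the I/O size: 1400.16 IOPS at 64 KiB (-o 65536) is 1400.16 * 65536 / 2^20 ≈ 87.51 MiB/s, matching the MiB/s column.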
00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62372 00:08:51.245 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62372) - No such process 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:51.245 { 00:08:51.245 "params": { 00:08:51.245 "name": "Nvme$subsystem", 00:08:51.245 "trtype": "$TEST_TRANSPORT", 00:08:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.245 "adrfam": "ipv4", 00:08:51.245 "trsvcid": "$NVMF_PORT", 00:08:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.245 "hdgst": ${hdgst:-false}, 00:08:51.245 "ddgst": ${ddgst:-false} 00:08:51.245 }, 00:08:51.245 "method": "bdev_nvme_attach_controller" 00:08:51.245 } 00:08:51.245 EOF 00:08:51.245 )") 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:51.245 10:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:51.245 10:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:51.245 "params": { 00:08:51.245 "name": "Nvme0", 00:08:51.245 "trtype": "tcp", 00:08:51.245 "traddr": "10.0.0.3", 00:08:51.246 "adrfam": "ipv4", 00:08:51.246 "trsvcid": "4420", 00:08:51.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:51.246 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:51.246 "hdgst": false, 00:08:51.246 "ddgst": false 00:08:51.246 }, 00:08:51.246 "method": "bdev_nvme_attach_controller" 00:08:51.246 }' 00:08:51.246 [2024-11-19 10:03:05.052983] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:51.246 [2024-11-19 10:03:05.053079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62416 ] 00:08:51.506 [2024-11-19 10:03:05.205304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.506 [2024-11-19 10:03:05.268423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.506 [2024-11-19 10:03:05.347736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.765 Running I/O for 1 seconds... 00:08:52.699 1472.00 IOPS, 92.00 MiB/s 00:08:52.699 Latency(us) 00:08:52.699 [2024-11-19T10:03:06.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.699 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:52.699 Verification LBA range: start 0x0 length 0x400 00:08:52.699 Nvme0n1 : 1.04 1476.11 92.26 0.00 0.00 42500.37 6017.40 40513.16 00:08:52.699 [2024-11-19T10:03:06.588Z] =================================================================================================================== 00:08:52.699 [2024-11-19T10:03:06.588Z] Total : 1476.11 92.26 0.00 0.00 42500.37 6017.40 40513.16 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.958 rmmod nvme_tcp 00:08:52.958 rmmod nvme_fabrics 00:08:52.958 rmmod nvme_keyring 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62326 ']' 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62326 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62326 ']' 00:08:52.958 10:03:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62326 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.958 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62326 00:08:53.217 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:53.217 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:53.217 killing process with pid 62326 00:08:53.217 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62326' 00:08:53.217 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62326 00:08:53.217 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62326 00:08:53.217 [2024-11-19 10:03:07.068309] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:53.217 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:53.477 10:03:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.477 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.741 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:53.741 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:53.741 00:08:53.741 real 0m5.951s 00:08:53.741 user 0m21.263s 00:08:53.741 sys 0m1.717s 00:08:53.741 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.741 ************************************ 00:08:53.741 END TEST nvmf_host_management 00:08:53.741 ************************************ 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.742 ************************************ 00:08:53.742 START TEST nvmf_lvol 00:08:53.742 ************************************ 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.742 * Looking for test storage... 
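For completeness, the nvmftestfini teardown traced just before the END TEST banner above (unloading the nvme modules and removing the veth/bridge plumbing) can be approximated by hand roughly as follows; the interface and namespace names are the ones the harness uses, while the exact ordering and the final namespace removal are assumptions:

# hedged approximation of the cleanup performed by nvmftestfini above
modprobe -r nvme-tcp nvme-fabrics nvme-keyring || true
for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" nomaster 2>/dev/null || true
    ip link set "$link" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_init_if2 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true    # namespace removal assumed, not shown in the trace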
00:08:53.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.742 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.743 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:53.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.743 --rc genhtml_branch_coverage=1 00:08:53.743 --rc genhtml_function_coverage=1 00:08:53.743 --rc genhtml_legend=1 00:08:53.744 --rc geninfo_all_blocks=1 00:08:53.744 --rc geninfo_unexecuted_blocks=1 00:08:53.744 00:08:53.744 ' 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:53.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.744 --rc genhtml_branch_coverage=1 00:08:53.744 --rc genhtml_function_coverage=1 00:08:53.744 --rc genhtml_legend=1 00:08:53.744 --rc geninfo_all_blocks=1 00:08:53.744 --rc geninfo_unexecuted_blocks=1 00:08:53.744 00:08:53.744 ' 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:53.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.744 --rc genhtml_branch_coverage=1 00:08:53.744 --rc genhtml_function_coverage=1 00:08:53.744 --rc genhtml_legend=1 00:08:53.744 --rc geninfo_all_blocks=1 00:08:53.744 --rc geninfo_unexecuted_blocks=1 00:08:53.744 00:08:53.744 ' 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:53.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.744 --rc genhtml_branch_coverage=1 00:08:53.744 --rc genhtml_function_coverage=1 00:08:53.744 --rc genhtml_legend=1 00:08:53.744 --rc geninfo_all_blocks=1 00:08:53.744 --rc geninfo_unexecuted_blocks=1 00:08:53.744 00:08:53.744 ' 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.744 10:03:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.744 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:54.003 
10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
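[annotation] The assignments above (the bridge names follow just below) fix the interface, bridge, and namespace names that nvmf_veth_init wires together in the next entries. For orientation, a condensed sketch of that topology, shown for one initiator pair and one target pair only — the harness creates the *_if2/*_br2 pairs the same way; this is a reader's illustration, not the harness code itself:

    # Two veth pairs: the *_if ends carry the IPs, the *_br ends get enslaved
    # to a common bridge; the target-side *_if end lives in its own netns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br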
00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:54.003 Cannot find device "nvmf_init_br" 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:54.003 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:54.003 Cannot find device "nvmf_init_br2" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:54.004 Cannot find device "nvmf_tgt_br" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.004 Cannot find device "nvmf_tgt_br2" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:54.004 Cannot find device "nvmf_init_br" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:54.004 Cannot find device "nvmf_init_br2" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:54.004 Cannot find device "nvmf_tgt_br" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:54.004 Cannot find device "nvmf_tgt_br2" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:54.004 Cannot find device "nvmf_br" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:54.004 Cannot find device "nvmf_init_if" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:54.004 Cannot find device "nvmf_init_if2" 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:54.004 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.262 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:54.263 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:54.263 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:54.263 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.263 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:54.263 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:54.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:54.263 00:08:54.263 --- 10.0.0.3 ping statistics --- 00:08:54.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.263 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:54.263 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:54.263 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:54.263 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:08:54.263 00:08:54.263 --- 10.0.0.4 ping statistics --- 00:08:54.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.263 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:54.263 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:54.263 00:08:54.263 --- 10.0.0.1 ping statistics --- 00:08:54.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.263 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:54.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:08:54.263 00:08:54.263 --- 10.0.0.2 ping statistics --- 00:08:54.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.263 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62689 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62689 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62689 ']' 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.263 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:54.263 [2024-11-19 10:03:08.090623] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:54.263 [2024-11-19 10:03:08.090710] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.522 [2024-11-19 10:03:08.243759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.522 [2024-11-19 10:03:08.312812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.522 [2024-11-19 10:03:08.312881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.522 [2024-11-19 10:03:08.312895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.522 [2024-11-19 10:03:08.312906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.522 [2024-11-19 10:03:08.312939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.522 [2024-11-19 10:03:08.314205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.522 [2024-11-19 10:03:08.314299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.522 [2024-11-19 10:03:08.314305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.522 [2024-11-19 10:03:08.371530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.780 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.780 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:54.780 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.780 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.780 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.780 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.780 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:55.039 [2024-11-19 10:03:08.767235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.039 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.298 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:55.298 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.863 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:55.863 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:55.863 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:56.429 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c8ac4ae7-67b1-426f-acb4-289194a425ed 00:08:56.429 10:03:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c8ac4ae7-67b1-426f-acb4-289194a425ed lvol 20 00:08:56.687 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2286e506-ec94-4bf5-8978-d184b7a12bfa 00:08:56.687 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:56.946 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2286e506-ec94-4bf5-8978-d184b7a12bfa 00:08:57.205 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:57.502 [2024-11-19 10:03:11.180925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:57.502 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:57.764 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62757 00:08:57.764 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:57.764 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:58.698 10:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2286e506-ec94-4bf5-8978-d184b7a12bfa MY_SNAPSHOT 00:08:58.956 10:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f2558e41-e5ff-4ae3-a919-32c84babe16e 00:08:58.956 10:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2286e506-ec94-4bf5-8978-d184b7a12bfa 30 00:08:59.214 10:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f2558e41-e5ff-4ae3-a919-32c84babe16e MY_CLONE 00:08:59.472 10:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=471161d1-03e6-4df8-948c-9919e6bdcbe5 00:08:59.472 10:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 471161d1-03e6-4df8-948c-9919e6bdcbe5 00:09:00.039 10:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62757 00:09:08.204 Initializing NVMe Controllers 00:09:08.204 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:08.204 Controller IO queue size 128, less than required. 00:09:08.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:08.204 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:08.204 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:08.204 Initialization complete. Launching workers. 
00:09:08.204 ======================================================== 00:09:08.204 Latency(us) 00:09:08.204 Device Information : IOPS MiB/s Average min max 00:09:08.204 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10475.32 40.92 12220.76 1450.41 91840.93 00:09:08.204 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10415.13 40.68 12298.80 3127.61 50562.16 00:09:08.204 ======================================================== 00:09:08.204 Total : 20890.45 81.60 12259.67 1450.41 91840.93 00:09:08.204 00:09:08.204 10:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.463 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2286e506-ec94-4bf5-8978-d184b7a12bfa 00:09:08.722 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c8ac4ae7-67b1-426f-acb4-289194a425ed 00:09:08.980 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:08.980 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:08.980 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:08.980 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.980 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.240 rmmod nvme_tcp 00:09:09.240 rmmod nvme_fabrics 00:09:09.240 rmmod nvme_keyring 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62689 ']' 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62689 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62689 ']' 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62689 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62689 00:09:09.240 killing process with pid 62689 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62689' 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62689 00:09:09.240 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62689 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.498 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:09.756 00:09:09.756 real 0m16.049s 00:09:09.756 user 1m6.054s 00:09:09.756 sys 0m4.292s 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:09.756 ************************************ 00:09:09.756 END TEST nvmf_lvol 00:09:09.756 ************************************ 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.756 ************************************ 00:09:09.756 START TEST nvmf_lvs_grow 00:09:09.756 ************************************ 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:09.756 * Looking for test storage... 00:09:09.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.756 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:10.015 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.016 --rc genhtml_branch_coverage=1 00:09:10.016 --rc genhtml_function_coverage=1 00:09:10.016 --rc genhtml_legend=1 00:09:10.016 --rc geninfo_all_blocks=1 00:09:10.016 --rc geninfo_unexecuted_blocks=1 00:09:10.016 00:09:10.016 ' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.016 --rc genhtml_branch_coverage=1 00:09:10.016 --rc genhtml_function_coverage=1 00:09:10.016 --rc genhtml_legend=1 00:09:10.016 --rc geninfo_all_blocks=1 00:09:10.016 --rc geninfo_unexecuted_blocks=1 00:09:10.016 00:09:10.016 ' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.016 --rc genhtml_branch_coverage=1 00:09:10.016 --rc genhtml_function_coverage=1 00:09:10.016 --rc genhtml_legend=1 00:09:10.016 --rc geninfo_all_blocks=1 00:09:10.016 --rc geninfo_unexecuted_blocks=1 00:09:10.016 00:09:10.016 ' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.016 --rc genhtml_branch_coverage=1 00:09:10.016 --rc genhtml_function_coverage=1 00:09:10.016 --rc genhtml_legend=1 00:09:10.016 --rc geninfo_all_blocks=1 00:09:10.016 --rc geninfo_unexecuted_blocks=1 00:09:10.016 00:09:10.016 ' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:10.016 10:03:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.016 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
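[annotation] With common.sh sourced again, the nvmf_lvs_grow test is about to repeat the same environment bring-up that nvmf_lvol performed above. Before it does, a condensed recap of the RPC sequence the nvmf_lvol run drove through rpc.py (commands as recorded above; UUID capture simplified) — an orientation sketch, not a substitute for the script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    m0=$("$rpc" bdev_malloc_create 64 512)              # two 64 MiB malloc bdevs
    m1=$("$rpc" bdev_malloc_create 64 512)
    "$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"
    lvs=$("$rpc" bdev_lvol_create_lvstore raid0 lvs)    # lvstore on the raid0 bdev
    lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # While spdk_nvme_perf writes over NVMe/TCP, exercise the lvol features:
    snap=$("$rpc" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    "$rpc" bdev_lvol_resize "$lvol" 30                  # grow the volume 20 -> 30 MiB
    clone=$("$rpc" bdev_lvol_clone "$snap" MY_CLONE)
    "$rpc" bdev_lvol_inflate "$clone"                   # decouple the clone from its snapshot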
00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:10.016 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:10.017 Cannot find device "nvmf_init_br" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:10.017 Cannot find device "nvmf_init_br2" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:10.017 Cannot find device "nvmf_tgt_br" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:10.017 Cannot find device "nvmf_tgt_br2" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:10.017 Cannot find device "nvmf_init_br" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:10.017 Cannot find device "nvmf_init_br2" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:10.017 Cannot find device "nvmf_tgt_br" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:10.017 Cannot find device "nvmf_tgt_br2" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:10.017 Cannot find device "nvmf_br" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:10.017 Cannot find device "nvmf_init_if" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:10.017 Cannot find device "nvmf_init_if2" 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.017 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.275 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:10.275 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:10.275 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.275 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:10.275 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.275 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
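[annotation] The bridge ports are now enslaved; the next entries open the firewall for the NVMe/TCP port and allow forwarding across the bridge. The ipts wrapper tags each rule with an SPDK_NVMF comment so that iptr can strip exactly those rules at teardown (as seen earlier when nvmf_lvol finished). A minimal sketch of both halves, using the rules recorded in this log:

    # Setup (ipts): accept NVMe/TCP on both initiator interfaces, forward across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    # Teardown (iptr): restore the ruleset minus every rule tagged SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore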
00:09:10.275 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:10.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:09:10.276 00:09:10.276 --- 10.0.0.3 ping statistics --- 00:09:10.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.276 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.276 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:10.276 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:09:10.276 00:09:10.276 --- 10.0.0.4 ping statistics --- 00:09:10.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.276 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:10.276 00:09:10.276 --- 10.0.0.1 ping statistics --- 00:09:10.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.276 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:09:10.276 00:09:10.276 --- 10.0.0.2 ping statistics --- 00:09:10.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.276 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63143 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63143 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63143 ']' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.276 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.534 [2024-11-19 10:03:24.201228] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
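Before the target is started, the trace above opens the firewall for the NVMe/TCP port and verifies connectivity in both directions. A minimal sketch, assuming ipts is a thin iptables wrapper; its expansion at nvmf/common.sh@790 suggests it only appends an SPDK_NVMF comment so teardown can find these rules again later:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }    # consistent with the expanded command in the trace
  ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                               # root namespace -> target namespace
  ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

After the four pings, common.sh returns 0, loads nvme-tcp, and launches nvmf_tgt inside the namespace (NVMF_APP is prefixed with the ip netns exec command at common.sh@227).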
00:09:10.534 [2024-11-19 10:03:24.201359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.534 [2024-11-19 10:03:24.356045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.534 [2024-11-19 10:03:24.423167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.534 [2024-11-19 10:03:24.423252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.534 [2024-11-19 10:03:24.423266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.534 [2024-11-19 10:03:24.423276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.534 [2024-11-19 10:03:24.423286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.534 [2024-11-19 10:03:24.423742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.793 [2024-11-19 10:03:24.481297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.793 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.793 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:10.793 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:10.793 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.793 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.793 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.793 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.051 [2024-11-19 10:03:24.867120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.051 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:11.051 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.051 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.051 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:11.051 ************************************ 00:09:11.051 START TEST lvs_grow_clean 00:09:11.051 ************************************ 00:09:11.051 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:11.051 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:11.051 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:11.051 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:11.051 10:03:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:11.052 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:11.052 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:11.052 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:11.052 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:11.052 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:11.619 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:11.619 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:11.878 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:11.878 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:11.878 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:12.136 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:12.136 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:12.136 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a lvol 150 00:09:12.394 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5430f6ba-4856-4256-91d8-3d3922f65f1b 00:09:12.394 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:12.394 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:12.652 [2024-11-19 10:03:26.452599] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:12.652 [2024-11-19 10:03:26.452691] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:12.652 true 00:09:12.652 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:12.652 10:03:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:12.911 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:12.911 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:13.478 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5430f6ba-4856-4256-91d8-3d3922f65f1b 00:09:13.478 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:13.737 [2024-11-19 10:03:27.613369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:13.995 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63229 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63229 /var/tmp/bdevperf.sock 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63229 ']' 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.254 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:14.254 [2024-11-19 10:03:28.028437] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
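For reference, the lvs_grow_clean setup traced above reduces to the following sequence (a sketch only: rpc, aio, lvs and lvol are shorthand variables introduced here, not names from the test script; paths, sizes and UUID handling are as logged):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"                                           # 200 MiB backing file
  "$rpc" bdev_aio_create "$aio" aio_bdev 4096                       # AIO bdev with 4 KiB blocks
  lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)                # 150 MiB lvol; the store starts with 49 data clusters
  truncate -s 400M "$aio"                                           # grow the backing file...
  "$rpc" bdev_aio_rescan aio_bdev                                   # ...and rescan: base bdev goes 51200 -> 102400 blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

Note that the rescan alone does not grow the logical volume store: the @38 check above still reports 49 total_data_clusters. The store is only grown later, with bdev_lvol_grow_lvstore, while bdevperf I/O is running.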
00:09:14.254 [2024-11-19 10:03:28.028953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63229 ] 00:09:14.513 [2024-11-19 10:03:28.179817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.513 [2024-11-19 10:03:28.248024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.513 [2024-11-19 10:03:28.307890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.459 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.459 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:15.459 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:15.718 Nvme0n1 00:09:15.718 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.976 [ 00:09:15.976 { 00:09:15.976 "name": "Nvme0n1", 00:09:15.976 "aliases": [ 00:09:15.976 "5430f6ba-4856-4256-91d8-3d3922f65f1b" 00:09:15.976 ], 00:09:15.976 "product_name": "NVMe disk", 00:09:15.976 "block_size": 4096, 00:09:15.976 "num_blocks": 38912, 00:09:15.976 "uuid": "5430f6ba-4856-4256-91d8-3d3922f65f1b", 00:09:15.976 "numa_id": -1, 00:09:15.976 "assigned_rate_limits": { 00:09:15.976 "rw_ios_per_sec": 0, 00:09:15.976 "rw_mbytes_per_sec": 0, 00:09:15.976 "r_mbytes_per_sec": 0, 00:09:15.976 "w_mbytes_per_sec": 0 00:09:15.976 }, 00:09:15.976 "claimed": false, 00:09:15.976 "zoned": false, 00:09:15.976 "supported_io_types": { 00:09:15.976 "read": true, 00:09:15.976 "write": true, 00:09:15.976 "unmap": true, 00:09:15.976 "flush": true, 00:09:15.976 "reset": true, 00:09:15.976 "nvme_admin": true, 00:09:15.976 "nvme_io": true, 00:09:15.976 "nvme_io_md": false, 00:09:15.976 "write_zeroes": true, 00:09:15.976 "zcopy": false, 00:09:15.976 "get_zone_info": false, 00:09:15.976 "zone_management": false, 00:09:15.976 "zone_append": false, 00:09:15.976 "compare": true, 00:09:15.976 "compare_and_write": true, 00:09:15.976 "abort": true, 00:09:15.976 "seek_hole": false, 00:09:15.976 "seek_data": false, 00:09:15.976 "copy": true, 00:09:15.976 "nvme_iov_md": false 00:09:15.976 }, 00:09:15.976 "memory_domains": [ 00:09:15.976 { 00:09:15.976 "dma_device_id": "system", 00:09:15.976 "dma_device_type": 1 00:09:15.976 } 00:09:15.976 ], 00:09:15.976 "driver_specific": { 00:09:15.976 "nvme": [ 00:09:15.976 { 00:09:15.976 "trid": { 00:09:15.976 "trtype": "TCP", 00:09:15.976 "adrfam": "IPv4", 00:09:15.976 "traddr": "10.0.0.3", 00:09:15.976 "trsvcid": "4420", 00:09:15.976 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:15.976 }, 00:09:15.976 "ctrlr_data": { 00:09:15.976 "cntlid": 1, 00:09:15.976 "vendor_id": "0x8086", 00:09:15.976 "model_number": "SPDK bdev Controller", 00:09:15.976 "serial_number": "SPDK0", 00:09:15.976 "firmware_revision": "25.01", 00:09:15.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.976 "oacs": { 00:09:15.976 "security": 0, 00:09:15.976 "format": 0, 00:09:15.976 "firmware": 0, 
00:09:15.976 "ns_manage": 0 00:09:15.976 }, 00:09:15.976 "multi_ctrlr": true, 00:09:15.976 "ana_reporting": false 00:09:15.976 }, 00:09:15.976 "vs": { 00:09:15.976 "nvme_version": "1.3" 00:09:15.976 }, 00:09:15.976 "ns_data": { 00:09:15.976 "id": 1, 00:09:15.976 "can_share": true 00:09:15.976 } 00:09:15.976 } 00:09:15.976 ], 00:09:15.976 "mp_policy": "active_passive" 00:09:15.976 } 00:09:15.976 } 00:09:15.976 ] 00:09:15.976 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63253 00:09:15.976 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.976 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:16.234 Running I/O for 10 seconds... 00:09:17.193 Latency(us) 00:09:17.193 [2024-11-19T10:03:31.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.193 Nvme0n1 : 1.00 7261.00 28.36 0.00 0.00 0.00 0.00 0.00 00:09:17.193 [2024-11-19T10:03:31.082Z] =================================================================================================================== 00:09:17.193 [2024-11-19T10:03:31.082Z] Total : 7261.00 28.36 0.00 0.00 0.00 0.00 0.00 00:09:17.193 00:09:18.128 10:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:18.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.129 Nvme0n1 : 2.00 7186.50 28.07 0.00 0.00 0.00 0.00 0.00 00:09:18.129 [2024-11-19T10:03:32.018Z] =================================================================================================================== 00:09:18.129 [2024-11-19T10:03:32.018Z] Total : 7186.50 28.07 0.00 0.00 0.00 0.00 0.00 00:09:18.129 00:09:18.387 true 00:09:18.387 10:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:18.387 10:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:18.646 10:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:18.646 10:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:18.646 10:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63253 00:09:19.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.213 Nvme0n1 : 3.00 7161.67 27.98 0.00 0.00 0.00 0.00 0.00 00:09:19.213 [2024-11-19T10:03:33.102Z] =================================================================================================================== 00:09:19.213 [2024-11-19T10:03:33.102Z] Total : 7161.67 27.98 0.00 0.00 0.00 0.00 0.00 00:09:19.213 00:09:20.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.149 Nvme0n1 : 4.00 7022.25 27.43 0.00 0.00 0.00 0.00 0.00 00:09:20.149 [2024-11-19T10:03:34.038Z] 
=================================================================================================================== 00:09:20.149 [2024-11-19T10:03:34.038Z] Total : 7022.25 27.43 0.00 0.00 0.00 0.00 0.00 00:09:20.149 00:09:21.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.085 Nvme0n1 : 5.00 7014.80 27.40 0.00 0.00 0.00 0.00 0.00 00:09:21.085 [2024-11-19T10:03:34.975Z] =================================================================================================================== 00:09:21.086 [2024-11-19T10:03:34.975Z] Total : 7014.80 27.40 0.00 0.00 0.00 0.00 0.00 00:09:21.086 00:09:22.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.023 Nvme0n1 : 6.00 6926.67 27.06 0.00 0.00 0.00 0.00 0.00 00:09:22.023 [2024-11-19T10:03:35.912Z] =================================================================================================================== 00:09:22.023 [2024-11-19T10:03:35.912Z] Total : 6926.67 27.06 0.00 0.00 0.00 0.00 0.00 00:09:22.023 00:09:23.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.401 Nvme0n1 : 7.00 6953.14 27.16 0.00 0.00 0.00 0.00 0.00 00:09:23.401 [2024-11-19T10:03:37.290Z] =================================================================================================================== 00:09:23.401 [2024-11-19T10:03:37.290Z] Total : 6953.14 27.16 0.00 0.00 0.00 0.00 0.00 00:09:23.401 00:09:24.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.339 Nvme0n1 : 8.00 6941.25 27.11 0.00 0.00 0.00 0.00 0.00 00:09:24.339 [2024-11-19T10:03:38.228Z] =================================================================================================================== 00:09:24.339 [2024-11-19T10:03:38.228Z] Total : 6941.25 27.11 0.00 0.00 0.00 0.00 0.00 00:09:24.339 00:09:25.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.275 Nvme0n1 : 9.00 6946.11 27.13 0.00 0.00 0.00 0.00 0.00 00:09:25.275 [2024-11-19T10:03:39.164Z] =================================================================================================================== 00:09:25.275 [2024-11-19T10:03:39.164Z] Total : 6946.11 27.13 0.00 0.00 0.00 0.00 0.00 00:09:25.275 00:09:26.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.210 Nvme0n1 : 10.00 6937.30 27.10 0.00 0.00 0.00 0.00 0.00 00:09:26.210 [2024-11-19T10:03:40.099Z] =================================================================================================================== 00:09:26.210 [2024-11-19T10:03:40.099Z] Total : 6937.30 27.10 0.00 0.00 0.00 0.00 0.00 00:09:26.210 00:09:26.210 00:09:26.210 Latency(us) 00:09:26.210 [2024-11-19T10:03:40.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.210 Nvme0n1 : 10.01 6943.77 27.12 0.00 0.00 18428.04 5570.56 109147.23 00:09:26.210 [2024-11-19T10:03:40.099Z] =================================================================================================================== 00:09:26.210 [2024-11-19T10:03:40.099Z] Total : 6943.77 27.12 0.00 0.00 18428.04 5570.56 109147.23 00:09:26.210 { 00:09:26.210 "results": [ 00:09:26.210 { 00:09:26.210 "job": "Nvme0n1", 00:09:26.210 "core_mask": "0x2", 00:09:26.210 "workload": "randwrite", 00:09:26.210 "status": "finished", 00:09:26.210 "queue_depth": 128, 00:09:26.210 "io_size": 4096, 00:09:26.210 "runtime": 
10.009111, 00:09:26.210 "iops": 6943.773527938695, 00:09:26.210 "mibps": 27.124115343510528, 00:09:26.210 "io_failed": 0, 00:09:26.210 "io_timeout": 0, 00:09:26.210 "avg_latency_us": 18428.03821506819, 00:09:26.210 "min_latency_us": 5570.56, 00:09:26.210 "max_latency_us": 109147.2290909091 00:09:26.210 } 00:09:26.210 ], 00:09:26.210 "core_count": 1 00:09:26.210 } 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63229 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63229 ']' 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63229 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63229 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:26.210 killing process with pid 63229 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63229' 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63229 00:09:26.210 Received shutdown signal, test time was about 10.000000 seconds 00:09:26.210 00:09:26.210 Latency(us) 00:09:26.210 [2024-11-19T10:03:40.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.210 [2024-11-19T10:03:40.099Z] =================================================================================================================== 00:09:26.210 [2024-11-19T10:03:40.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:26.210 10:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63229 00:09:26.469 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:26.728 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.986 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:26.986 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:27.245 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:27.245 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:27.245 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:27.503 [2024-11-19 10:03:41.366589] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:27.762 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:28.021 request: 00:09:28.021 { 00:09:28.021 "uuid": "e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a", 00:09:28.021 "method": "bdev_lvol_get_lvstores", 00:09:28.021 "req_id": 1 00:09:28.021 } 00:09:28.021 Got JSON-RPC error response 00:09:28.021 response: 00:09:28.021 { 00:09:28.021 "code": -19, 00:09:28.021 "message": "No such device" 00:09:28.021 } 00:09:28.021 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:28.021 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.021 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.021 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.021 10:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.280 aio_bdev 00:09:28.280 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
5430f6ba-4856-4256-91d8-3d3922f65f1b 00:09:28.280 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=5430f6ba-4856-4256-91d8-3d3922f65f1b 00:09:28.280 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.280 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:28.280 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.280 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.280 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:28.540 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5430f6ba-4856-4256-91d8-3d3922f65f1b -t 2000 00:09:28.798 [ 00:09:28.798 { 00:09:28.798 "name": "5430f6ba-4856-4256-91d8-3d3922f65f1b", 00:09:28.798 "aliases": [ 00:09:28.798 "lvs/lvol" 00:09:28.798 ], 00:09:28.798 "product_name": "Logical Volume", 00:09:28.798 "block_size": 4096, 00:09:28.798 "num_blocks": 38912, 00:09:28.798 "uuid": "5430f6ba-4856-4256-91d8-3d3922f65f1b", 00:09:28.798 "assigned_rate_limits": { 00:09:28.798 "rw_ios_per_sec": 0, 00:09:28.798 "rw_mbytes_per_sec": 0, 00:09:28.798 "r_mbytes_per_sec": 0, 00:09:28.798 "w_mbytes_per_sec": 0 00:09:28.798 }, 00:09:28.798 "claimed": false, 00:09:28.799 "zoned": false, 00:09:28.799 "supported_io_types": { 00:09:28.799 "read": true, 00:09:28.799 "write": true, 00:09:28.799 "unmap": true, 00:09:28.799 "flush": false, 00:09:28.799 "reset": true, 00:09:28.799 "nvme_admin": false, 00:09:28.799 "nvme_io": false, 00:09:28.799 "nvme_io_md": false, 00:09:28.799 "write_zeroes": true, 00:09:28.799 "zcopy": false, 00:09:28.799 "get_zone_info": false, 00:09:28.799 "zone_management": false, 00:09:28.799 "zone_append": false, 00:09:28.799 "compare": false, 00:09:28.799 "compare_and_write": false, 00:09:28.799 "abort": false, 00:09:28.799 "seek_hole": true, 00:09:28.799 "seek_data": true, 00:09:28.799 "copy": false, 00:09:28.799 "nvme_iov_md": false 00:09:28.799 }, 00:09:28.799 "driver_specific": { 00:09:28.799 "lvol": { 00:09:28.799 "lvol_store_uuid": "e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a", 00:09:28.799 "base_bdev": "aio_bdev", 00:09:28.799 "thin_provision": false, 00:09:28.799 "num_allocated_clusters": 38, 00:09:28.799 "snapshot": false, 00:09:28.799 "clone": false, 00:09:28.799 "esnap_clone": false 00:09:28.799 } 00:09:28.799 } 00:09:28.799 } 00:09:28.799 ] 00:09:28.799 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:28.799 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:28.799 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:29.057 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:29.057 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:29.058 10:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:29.316 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:29.316 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5430f6ba-4856-4256-91d8-3d3922f65f1b 00:09:29.886 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e9cfb475-ba0f-4e9b-b0fd-94aa79c0e50a 00:09:30.145 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.404 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:30.972 00:09:30.972 real 0m19.672s 00:09:30.972 user 0m18.572s 00:09:30.972 sys 0m2.853s 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:30.972 ************************************ 00:09:30.972 END TEST lvs_grow_clean 00:09:30.972 ************************************ 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:30.972 ************************************ 00:09:30.972 START TEST lvs_grow_dirty 00:09:30.972 ************************************ 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:30.972 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:30.973 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:30.973 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:30.973 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.232 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:31.232 10:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:31.491 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:31.491 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:31.491 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:31.750 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:31.750 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:31.750 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 lvol 150 00:09:32.009 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 00:09:32.009 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:32.009 10:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:32.268 [2024-11-19 10:03:46.061018] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:32.268 [2024-11-19 10:03:46.061119] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:32.268 true 00:09:32.268 10:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:32.268 10:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:32.527 10:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:32.528 10:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:32.787 10:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 00:09:33.354 10:03:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:33.354 [2024-11-19 10:03:47.229627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:33.613 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63511 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63511 /var/tmp/bdevperf.sock 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63511 ']' 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.873 10:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.873 [2024-11-19 10:03:47.582003] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
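The dirty variant now repeats the measurement phase the clean run above already went through: bdevperf is started as a separate SPDK app, attached to the exported lvol over TCP, and the store is grown while the 10-second random-write job is in flight. Condensed from the trace (rpc.py and bdevperf.py abbreviate the scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths in the repo; <lvs-uuid> stands for the store UUID logged for each run):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # the script waits for /var/tmp/bdevperf.sock via waitforlisten before issuing RPCs
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &              # 10 s of 4 KiB random writes against Nvme0n1
  sleep 2
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>                        # grow the store under load
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # expected to report 99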
00:09:33.873 [2024-11-19 10:03:47.582137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63511 ] 00:09:33.873 [2024-11-19 10:03:47.740408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.132 [2024-11-19 10:03:47.810962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.132 [2024-11-19 10:03:47.870535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.701 10:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.701 10:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:34.701 10:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:35.269 Nvme0n1 00:09:35.269 10:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:35.527 [ 00:09:35.527 { 00:09:35.527 "name": "Nvme0n1", 00:09:35.527 "aliases": [ 00:09:35.527 "0a78e52f-3fa1-4a3b-a85d-ad4639f12e35" 00:09:35.527 ], 00:09:35.527 "product_name": "NVMe disk", 00:09:35.527 "block_size": 4096, 00:09:35.527 "num_blocks": 38912, 00:09:35.527 "uuid": "0a78e52f-3fa1-4a3b-a85d-ad4639f12e35", 00:09:35.527 "numa_id": -1, 00:09:35.527 "assigned_rate_limits": { 00:09:35.527 "rw_ios_per_sec": 0, 00:09:35.527 "rw_mbytes_per_sec": 0, 00:09:35.527 "r_mbytes_per_sec": 0, 00:09:35.527 "w_mbytes_per_sec": 0 00:09:35.527 }, 00:09:35.527 "claimed": false, 00:09:35.527 "zoned": false, 00:09:35.527 "supported_io_types": { 00:09:35.527 "read": true, 00:09:35.527 "write": true, 00:09:35.527 "unmap": true, 00:09:35.527 "flush": true, 00:09:35.527 "reset": true, 00:09:35.527 "nvme_admin": true, 00:09:35.527 "nvme_io": true, 00:09:35.527 "nvme_io_md": false, 00:09:35.527 "write_zeroes": true, 00:09:35.527 "zcopy": false, 00:09:35.527 "get_zone_info": false, 00:09:35.527 "zone_management": false, 00:09:35.528 "zone_append": false, 00:09:35.528 "compare": true, 00:09:35.528 "compare_and_write": true, 00:09:35.528 "abort": true, 00:09:35.528 "seek_hole": false, 00:09:35.528 "seek_data": false, 00:09:35.528 "copy": true, 00:09:35.528 "nvme_iov_md": false 00:09:35.528 }, 00:09:35.528 "memory_domains": [ 00:09:35.528 { 00:09:35.528 "dma_device_id": "system", 00:09:35.528 "dma_device_type": 1 00:09:35.528 } 00:09:35.528 ], 00:09:35.528 "driver_specific": { 00:09:35.528 "nvme": [ 00:09:35.528 { 00:09:35.528 "trid": { 00:09:35.528 "trtype": "TCP", 00:09:35.528 "adrfam": "IPv4", 00:09:35.528 "traddr": "10.0.0.3", 00:09:35.528 "trsvcid": "4420", 00:09:35.528 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:35.528 }, 00:09:35.528 "ctrlr_data": { 00:09:35.528 "cntlid": 1, 00:09:35.528 "vendor_id": "0x8086", 00:09:35.528 "model_number": "SPDK bdev Controller", 00:09:35.528 "serial_number": "SPDK0", 00:09:35.528 "firmware_revision": "25.01", 00:09:35.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:35.528 "oacs": { 00:09:35.528 "security": 0, 00:09:35.528 "format": 0, 00:09:35.528 "firmware": 0, 
00:09:35.528 "ns_manage": 0 00:09:35.528 }, 00:09:35.528 "multi_ctrlr": true, 00:09:35.528 "ana_reporting": false 00:09:35.528 }, 00:09:35.528 "vs": { 00:09:35.528 "nvme_version": "1.3" 00:09:35.528 }, 00:09:35.528 "ns_data": { 00:09:35.528 "id": 1, 00:09:35.528 "can_share": true 00:09:35.528 } 00:09:35.528 } 00:09:35.528 ], 00:09:35.528 "mp_policy": "active_passive" 00:09:35.528 } 00:09:35.528 } 00:09:35.528 ] 00:09:35.528 10:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:35.528 10:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63540 00:09:35.528 10:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:35.528 Running I/O for 10 seconds... 00:09:36.465 Latency(us) 00:09:36.465 [2024-11-19T10:03:50.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.465 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:36.465 [2024-11-19T10:03:50.354Z] =================================================================================================================== 00:09:36.465 [2024-11-19T10:03:50.354Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:36.465 00:09:37.444 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:37.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.703 Nvme0n1 : 2.00 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:09:37.703 [2024-11-19T10:03:51.592Z] =================================================================================================================== 00:09:37.703 [2024-11-19T10:03:51.592Z] Total : 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:09:37.703 00:09:37.962 true 00:09:37.962 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:37.962 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:38.220 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:38.220 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:38.220 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63540 00:09:38.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.479 Nvme0n1 : 3.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:38.479 [2024-11-19T10:03:52.368Z] =================================================================================================================== 00:09:38.479 [2024-11-19T10:03:52.368Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:38.479 00:09:39.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.858 Nvme0n1 : 4.00 6953.25 27.16 0.00 0.00 0.00 0.00 0.00 00:09:39.858 [2024-11-19T10:03:53.747Z] 
=================================================================================================================== 00:09:39.858 [2024-11-19T10:03:53.747Z] Total : 6953.25 27.16 0.00 0.00 0.00 0.00 0.00 00:09:39.858 00:09:40.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.794 Nvme0n1 : 5.00 6883.40 26.89 0.00 0.00 0.00 0.00 0.00 00:09:40.794 [2024-11-19T10:03:54.683Z] =================================================================================================================== 00:09:40.794 [2024-11-19T10:03:54.683Z] Total : 6883.40 26.89 0.00 0.00 0.00 0.00 0.00 00:09:40.794 00:09:41.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.729 Nvme0n1 : 6.00 6794.83 26.54 0.00 0.00 0.00 0.00 0.00 00:09:41.729 [2024-11-19T10:03:55.618Z] =================================================================================================================== 00:09:41.729 [2024-11-19T10:03:55.618Z] Total : 6794.83 26.54 0.00 0.00 0.00 0.00 0.00 00:09:41.729 00:09:42.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.665 Nvme0n1 : 7.00 6803.86 26.58 0.00 0.00 0.00 0.00 0.00 00:09:42.665 [2024-11-19T10:03:56.554Z] =================================================================================================================== 00:09:42.665 [2024-11-19T10:03:56.554Z] Total : 6803.86 26.58 0.00 0.00 0.00 0.00 0.00 00:09:42.665 00:09:43.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.601 Nvme0n1 : 8.00 6810.62 26.60 0.00 0.00 0.00 0.00 0.00 00:09:43.601 [2024-11-19T10:03:57.490Z] =================================================================================================================== 00:09:43.601 [2024-11-19T10:03:57.490Z] Total : 6810.62 26.60 0.00 0.00 0.00 0.00 0.00 00:09:43.601 00:09:44.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.544 Nvme0n1 : 9.00 6787.67 26.51 0.00 0.00 0.00 0.00 0.00 00:09:44.544 [2024-11-19T10:03:58.433Z] =================================================================================================================== 00:09:44.544 [2024-11-19T10:03:58.433Z] Total : 6787.67 26.51 0.00 0.00 0.00 0.00 0.00 00:09:44.544 00:09:45.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.489 Nvme0n1 : 10.00 6769.30 26.44 0.00 0.00 0.00 0.00 0.00 00:09:45.489 [2024-11-19T10:03:59.378Z] =================================================================================================================== 00:09:45.489 [2024-11-19T10:03:59.378Z] Total : 6769.30 26.44 0.00 0.00 0.00 0.00 0.00 00:09:45.489 00:09:45.489 00:09:45.489 Latency(us) 00:09:45.489 [2024-11-19T10:03:59.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.489 Nvme0n1 : 10.01 6777.60 26.47 0.00 0.00 18880.57 8936.73 125829.12 00:09:45.489 [2024-11-19T10:03:59.378Z] =================================================================================================================== 00:09:45.489 [2024-11-19T10:03:59.378Z] Total : 6777.60 26.47 0.00 0.00 18880.57 8936.73 125829.12 00:09:45.489 { 00:09:45.489 "results": [ 00:09:45.489 { 00:09:45.489 "job": "Nvme0n1", 00:09:45.489 "core_mask": "0x2", 00:09:45.489 "workload": "randwrite", 00:09:45.489 "status": "finished", 00:09:45.489 "queue_depth": 128, 00:09:45.489 "io_size": 4096, 00:09:45.489 "runtime": 
10.006645, 00:09:45.489 "iops": 6777.596287267111, 00:09:45.489 "mibps": 26.47498549713715, 00:09:45.489 "io_failed": 0, 00:09:45.489 "io_timeout": 0, 00:09:45.489 "avg_latency_us": 18880.57011850714, 00:09:45.489 "min_latency_us": 8936.727272727272, 00:09:45.489 "max_latency_us": 125829.12 00:09:45.489 } 00:09:45.489 ], 00:09:45.489 "core_count": 1 00:09:45.489 } 00:09:45.490 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63511 00:09:45.490 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63511 ']' 00:09:45.490 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63511 00:09:45.490 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:45.490 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.490 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63511 00:09:45.751 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:45.751 killing process with pid 63511 00:09:45.751 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:45.751 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63511' 00:09:45.751 Received shutdown signal, test time was about 10.000000 seconds 00:09:45.751 00:09:45.751 Latency(us) 00:09:45.751 [2024-11-19T10:03:59.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.751 [2024-11-19T10:03:59.640Z] =================================================================================================================== 00:09:45.751 [2024-11-19T10:03:59.640Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:45.751 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63511 00:09:45.751 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63511 00:09:45.751 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:46.318 10:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.577 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:46.577 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63143 00:09:46.836 
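For reference, the grow-and-verify phase traced above reduces to a handful of RPC calls. A minimal hand-run sketch, using the lvstore UUID and socket paths from this particular run (they differ on every run):

    # Grow the lvstore onto the enlarged base bdev, then confirm the new cluster count.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0f46e528-f57c-4d79-be3a-b70cde7c9903
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 \
        | jq -r '.[0].total_data_clusters'    # the test expects 99 here
    # Kick off I/O in the already-running bdevperf instance over its RPC socket.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The kill -9 of the target right after the run is deliberate: it leaves the lvstore dirty so that the reload below has to go through blobstore recovery.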
10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63143 00:09:46.836 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63143 Killed "${NVMF_APP[@]}" "$@" 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63673 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63673 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63673 ']' 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.836 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.837 [2024-11-19 10:04:00.637761] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:46.837 [2024-11-19 10:04:00.637862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.096 [2024-11-19 10:04:00.783565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.096 [2024-11-19 10:04:00.849029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.096 [2024-11-19 10:04:00.849080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.096 [2024-11-19 10:04:00.849091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.096 [2024-11-19 10:04:00.849099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.096 [2024-11-19 10:04:00.849107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
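The restart above is what the dirty-lvstore case hinges on: a fresh nvmf_tgt is launched inside the test namespace and the harness waits for its RPC socket before reattaching the aio file. A rough equivalent by hand (the rpc_get_methods poll is only an illustration of "wait until the socket answers", not what waitforlisten literally runs):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the target is listening on /var/tmp/spdk.sock
    done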
00:09:47.096 [2024-11-19 10:04:00.849538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.096 [2024-11-19 10:04:00.903811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.096 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.096 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:47.096 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.096 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.097 10:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:47.355 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.355 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:47.614 [2024-11-19 10:04:01.325004] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:47.614 [2024-11-19 10:04:01.325354] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:47.614 [2024-11-19 10:04:01.325551] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:47.614 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:47.614 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 00:09:47.614 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 00:09:47.614 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.614 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:47.614 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.614 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.614 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:47.872 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 -t 2000 00:09:48.137 [ 00:09:48.137 { 00:09:48.137 "name": "0a78e52f-3fa1-4a3b-a85d-ad4639f12e35", 00:09:48.137 "aliases": [ 00:09:48.137 "lvs/lvol" 00:09:48.137 ], 00:09:48.137 "product_name": "Logical Volume", 00:09:48.137 "block_size": 4096, 00:09:48.137 "num_blocks": 38912, 00:09:48.137 "uuid": "0a78e52f-3fa1-4a3b-a85d-ad4639f12e35", 00:09:48.137 "assigned_rate_limits": { 00:09:48.137 "rw_ios_per_sec": 0, 00:09:48.137 "rw_mbytes_per_sec": 0, 00:09:48.137 "r_mbytes_per_sec": 0, 00:09:48.137 "w_mbytes_per_sec": 0 00:09:48.137 }, 00:09:48.137 
"claimed": false, 00:09:48.137 "zoned": false, 00:09:48.137 "supported_io_types": { 00:09:48.137 "read": true, 00:09:48.137 "write": true, 00:09:48.137 "unmap": true, 00:09:48.137 "flush": false, 00:09:48.137 "reset": true, 00:09:48.137 "nvme_admin": false, 00:09:48.137 "nvme_io": false, 00:09:48.137 "nvme_io_md": false, 00:09:48.137 "write_zeroes": true, 00:09:48.137 "zcopy": false, 00:09:48.137 "get_zone_info": false, 00:09:48.137 "zone_management": false, 00:09:48.137 "zone_append": false, 00:09:48.137 "compare": false, 00:09:48.137 "compare_and_write": false, 00:09:48.137 "abort": false, 00:09:48.137 "seek_hole": true, 00:09:48.137 "seek_data": true, 00:09:48.137 "copy": false, 00:09:48.137 "nvme_iov_md": false 00:09:48.137 }, 00:09:48.137 "driver_specific": { 00:09:48.137 "lvol": { 00:09:48.137 "lvol_store_uuid": "0f46e528-f57c-4d79-be3a-b70cde7c9903", 00:09:48.137 "base_bdev": "aio_bdev", 00:09:48.137 "thin_provision": false, 00:09:48.137 "num_allocated_clusters": 38, 00:09:48.137 "snapshot": false, 00:09:48.137 "clone": false, 00:09:48.137 "esnap_clone": false 00:09:48.137 } 00:09:48.137 } 00:09:48.137 } 00:09:48.137 ] 00:09:48.137 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:48.137 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:48.137 10:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:48.397 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:48.397 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:48.397 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:48.657 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:48.657 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:48.916 [2024-11-19 10:04:02.758648] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.916 10:04:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:48.916 10:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:49.485 request: 00:09:49.485 { 00:09:49.485 "uuid": "0f46e528-f57c-4d79-be3a-b70cde7c9903", 00:09:49.485 "method": "bdev_lvol_get_lvstores", 00:09:49.485 "req_id": 1 00:09:49.485 } 00:09:49.485 Got JSON-RPC error response 00:09:49.485 response: 00:09:49.485 { 00:09:49.485 "code": -19, 00:09:49.485 "message": "No such device" 00:09:49.485 } 00:09:49.485 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:49.485 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.485 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:49.485 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.485 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:49.744 aio_bdev 00:09:49.744 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 00:09:49.744 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 00:09:49.744 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.744 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:49.744 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.744 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.744 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:50.003 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 -t 2000 00:09:50.262 [ 00:09:50.262 { 
00:09:50.262 "name": "0a78e52f-3fa1-4a3b-a85d-ad4639f12e35", 00:09:50.262 "aliases": [ 00:09:50.262 "lvs/lvol" 00:09:50.262 ], 00:09:50.262 "product_name": "Logical Volume", 00:09:50.262 "block_size": 4096, 00:09:50.262 "num_blocks": 38912, 00:09:50.262 "uuid": "0a78e52f-3fa1-4a3b-a85d-ad4639f12e35", 00:09:50.262 "assigned_rate_limits": { 00:09:50.262 "rw_ios_per_sec": 0, 00:09:50.262 "rw_mbytes_per_sec": 0, 00:09:50.262 "r_mbytes_per_sec": 0, 00:09:50.262 "w_mbytes_per_sec": 0 00:09:50.262 }, 00:09:50.262 "claimed": false, 00:09:50.262 "zoned": false, 00:09:50.262 "supported_io_types": { 00:09:50.262 "read": true, 00:09:50.262 "write": true, 00:09:50.262 "unmap": true, 00:09:50.262 "flush": false, 00:09:50.262 "reset": true, 00:09:50.262 "nvme_admin": false, 00:09:50.262 "nvme_io": false, 00:09:50.262 "nvme_io_md": false, 00:09:50.262 "write_zeroes": true, 00:09:50.262 "zcopy": false, 00:09:50.262 "get_zone_info": false, 00:09:50.262 "zone_management": false, 00:09:50.262 "zone_append": false, 00:09:50.262 "compare": false, 00:09:50.262 "compare_and_write": false, 00:09:50.262 "abort": false, 00:09:50.262 "seek_hole": true, 00:09:50.262 "seek_data": true, 00:09:50.262 "copy": false, 00:09:50.262 "nvme_iov_md": false 00:09:50.262 }, 00:09:50.262 "driver_specific": { 00:09:50.262 "lvol": { 00:09:50.262 "lvol_store_uuid": "0f46e528-f57c-4d79-be3a-b70cde7c9903", 00:09:50.262 "base_bdev": "aio_bdev", 00:09:50.262 "thin_provision": false, 00:09:50.262 "num_allocated_clusters": 38, 00:09:50.262 "snapshot": false, 00:09:50.262 "clone": false, 00:09:50.262 "esnap_clone": false 00:09:50.262 } 00:09:50.262 } 00:09:50.262 } 00:09:50.262 ] 00:09:50.262 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:50.262 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:50.262 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:50.521 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:50.521 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:50.521 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:50.780 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:50.780 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0a78e52f-3fa1-4a3b-a85d-ad4639f12e35 00:09:51.347 10:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f46e528-f57c-4d79-be3a-b70cde7c9903 00:09:51.606 10:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.865 10:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:52.432 00:09:52.432 real 0m21.494s 00:09:52.432 user 0m46.052s 00:09:52.432 sys 0m8.179s 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:52.432 ************************************ 00:09:52.432 END TEST lvs_grow_dirty 00:09:52.432 ************************************ 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:52.432 nvmf_trace.0 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.432 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.692 rmmod nvme_tcp 00:09:52.692 rmmod nvme_fabrics 00:09:52.692 rmmod nvme_keyring 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63673 ']' 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63673 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63673 ']' 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63673 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:52.692 10:04:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63673 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.692 killing process with pid 63673 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63673' 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63673 00:09:52.692 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63673 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:52.968 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:53.238 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:53.238 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:53.238 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.239 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.239 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:53.239 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.239 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.239 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.239 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:53.239 00:09:53.239 real 0m43.466s 00:09:53.239 user 1m11.329s 00:09:53.239 sys 0m11.811s 00:09:53.239 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.239 ************************************ 00:09:53.239 END TEST nvmf_lvs_grow 00:09:53.239 ************************************ 00:09:53.239 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:53.239 10:04:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:53.239 10:04:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.239 10:04:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.239 10:04:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.239 ************************************ 00:09:53.239 START TEST nvmf_bdev_io_wait 00:09:53.239 ************************************ 00:09:53.239 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:53.239 * Looking for test storage... 
00:09:53.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.499 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.499 --rc genhtml_branch_coverage=1 00:09:53.499 --rc genhtml_function_coverage=1 00:09:53.499 --rc genhtml_legend=1 00:09:53.499 --rc geninfo_all_blocks=1 00:09:53.500 --rc geninfo_unexecuted_blocks=1 00:09:53.500 00:09:53.500 ' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.500 --rc genhtml_branch_coverage=1 00:09:53.500 --rc genhtml_function_coverage=1 00:09:53.500 --rc genhtml_legend=1 00:09:53.500 --rc geninfo_all_blocks=1 00:09:53.500 --rc geninfo_unexecuted_blocks=1 00:09:53.500 00:09:53.500 ' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.500 --rc genhtml_branch_coverage=1 00:09:53.500 --rc genhtml_function_coverage=1 00:09:53.500 --rc genhtml_legend=1 00:09:53.500 --rc geninfo_all_blocks=1 00:09:53.500 --rc geninfo_unexecuted_blocks=1 00:09:53.500 00:09:53.500 ' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.500 --rc genhtml_branch_coverage=1 00:09:53.500 --rc genhtml_function_coverage=1 00:09:53.500 --rc genhtml_legend=1 00:09:53.500 --rc geninfo_all_blocks=1 00:09:53.500 --rc geninfo_unexecuted_blocks=1 00:09:53.500 00:09:53.500 ' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.500 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
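MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 feed directly into the target-side setup that the trace performs further down through rpc_cmd. Condensed into plain rpc.py calls against the same target, the wiring is roughly:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420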
00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:53.500 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.501 
10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:53.501 Cannot find device "nvmf_init_br" 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:53.501 Cannot find device "nvmf_init_br2" 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:53.501 Cannot find device "nvmf_tgt_br" 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.501 Cannot find device "nvmf_tgt_br2" 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:53.501 Cannot find device "nvmf_init_br" 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:53.501 Cannot find device "nvmf_init_br2" 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:53.501 Cannot find device "nvmf_tgt_br" 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:53.501 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:53.760 Cannot find device "nvmf_tgt_br2" 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:53.760 Cannot find device "nvmf_br" 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:53.760 Cannot find device "nvmf_init_if" 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:53.760 Cannot find device "nvmf_init_if2" 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:53.760 
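The "Cannot find device" / "Cannot open network namespace" lines above are just nvmftestinit tearing down a topology that does not exist yet; the entries that follow then build it: two initiator-side veth interfaces (10.0.0.1/24, 10.0.0.2/24) on the host, two target-side interfaces (10.0.0.3/24, 10.0.0.4/24) inside nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge. Condensed (one interface pair shown; the *2 pair is identical):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # bring all links up, then open the NVMe/TCP port on the initiator side
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT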
10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:53.760 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.761 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:54.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:54.020 00:09:54.020 --- 10.0.0.3 ping statistics --- 00:09:54.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.020 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:54.020 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:54.020 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.104 ms 00:09:54.020 00:09:54.020 --- 10.0.0.4 ping statistics --- 00:09:54.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.020 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:54.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:09:54.020 00:09:54.020 --- 10.0.0.1 ping statistics --- 00:09:54.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.020 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:54.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:09:54.020 00:09:54.020 --- 10.0.0.2 ping statistics --- 00:09:54.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.020 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.020 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64048 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64048 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64048 ']' 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.021 10:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.021 [2024-11-19 10:04:07.791879] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:09:54.021 [2024-11-19 10:04:07.791999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.281 [2024-11-19 10:04:07.946427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.281 [2024-11-19 10:04:08.018191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.281 [2024-11-19 10:04:08.018262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.281 [2024-11-19 10:04:08.018276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.281 [2024-11-19 10:04:08.018286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.281 [2024-11-19 10:04:08.018296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.281 [2024-11-19 10:04:08.019596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.281 [2024-11-19 10:04:08.019752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.281 [2024-11-19 10:04:08.019951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.281 [2024-11-19 10:04:08.019954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.281 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.540 [2024-11-19 10:04:08.234024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.540 [2024-11-19 10:04:08.252136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.540 Malloc0 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.540 [2024-11-19 10:04:08.324062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64076 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64078 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.540 10:04:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.540 { 00:09:54.540 "params": { 00:09:54.540 "name": "Nvme$subsystem", 00:09:54.540 "trtype": "$TEST_TRANSPORT", 00:09:54.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.540 "adrfam": "ipv4", 00:09:54.540 "trsvcid": "$NVMF_PORT", 00:09:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.540 "hdgst": ${hdgst:-false}, 00:09:54.540 "ddgst": ${ddgst:-false} 00:09:54.540 }, 00:09:54.540 "method": "bdev_nvme_attach_controller" 00:09:54.540 } 00:09:54.540 EOF 00:09:54.540 )") 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64081 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.540 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.540 { 00:09:54.541 "params": { 00:09:54.541 "name": "Nvme$subsystem", 00:09:54.541 "trtype": "$TEST_TRANSPORT", 00:09:54.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.541 "adrfam": "ipv4", 00:09:54.541 "trsvcid": "$NVMF_PORT", 00:09:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.541 "hdgst": ${hdgst:-false}, 00:09:54.541 "ddgst": ${ddgst:-false} 00:09:54.541 }, 00:09:54.541 "method": "bdev_nvme_attach_controller" 00:09:54.541 } 00:09:54.541 EOF 00:09:54.541 )") 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
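The four bdevperf invocations above and below fan out one workload each (write, read, flush, unmap) against the same target, on disjoint core masks 0x10/0x20/0x40/0x80 with instance ids 1-4; -q 128 is the queue depth, -o 4096 the I/O size in bytes, and -t 1 the run time in seconds. Reading the EAL parameters further down, -s 256 appears to become the 256 MB memory reservation (-m 256) and -i the spdk1..spdk4 shared-memory file prefix; both are inferences from this log. A sketch of one instance as the script appears to run it, with the process substitution presumed to be what ends up as --json /dev/fd/63:

    # Write workload on core 4 (mask 0x10); the JSON describes the controller to attach (see below)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 -s 256 \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 &
    WRITE_PID=$!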
00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.541 "params": { 00:09:54.541 "name": "Nvme1", 00:09:54.541 "trtype": "tcp", 00:09:54.541 "traddr": "10.0.0.3", 00:09:54.541 "adrfam": "ipv4", 00:09:54.541 "trsvcid": "4420", 00:09:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.541 "hdgst": false, 00:09:54.541 "ddgst": false 00:09:54.541 }, 00:09:54.541 "method": "bdev_nvme_attach_controller" 00:09:54.541 }' 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64083 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.541 "params": { 00:09:54.541 "name": "Nvme1", 00:09:54.541 "trtype": "tcp", 00:09:54.541 "traddr": "10.0.0.3", 00:09:54.541 "adrfam": "ipv4", 00:09:54.541 "trsvcid": "4420", 00:09:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.541 "hdgst": false, 00:09:54.541 "ddgst": false 00:09:54.541 }, 00:09:54.541 "method": "bdev_nvme_attach_controller" 00:09:54.541 }' 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.541 { 00:09:54.541 "params": { 00:09:54.541 "name": "Nvme$subsystem", 00:09:54.541 "trtype": "$TEST_TRANSPORT", 00:09:54.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.541 "adrfam": "ipv4", 00:09:54.541 "trsvcid": "$NVMF_PORT", 00:09:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.541 "hdgst": ${hdgst:-false}, 00:09:54.541 "ddgst": ${ddgst:-false} 00:09:54.541 }, 00:09:54.541 "method": "bdev_nvme_attach_controller" 00:09:54.541 } 00:09:54.541 EOF 00:09:54.541 )") 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.541 { 00:09:54.541 "params": { 00:09:54.541 "name": "Nvme$subsystem", 00:09:54.541 "trtype": "$TEST_TRANSPORT", 
00:09:54.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.541 "adrfam": "ipv4", 00:09:54.541 "trsvcid": "$NVMF_PORT", 00:09:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.541 "hdgst": ${hdgst:-false}, 00:09:54.541 "ddgst": ${ddgst:-false} 00:09:54.541 }, 00:09:54.541 "method": "bdev_nvme_attach_controller" 00:09:54.541 } 00:09:54.541 EOF 00:09:54.541 )") 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.541 "params": { 00:09:54.541 "name": "Nvme1", 00:09:54.541 "trtype": "tcp", 00:09:54.541 "traddr": "10.0.0.3", 00:09:54.541 "adrfam": "ipv4", 00:09:54.541 "trsvcid": "4420", 00:09:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.541 "hdgst": false, 00:09:54.541 "ddgst": false 00:09:54.541 }, 00:09:54.541 "method": "bdev_nvme_attach_controller" 00:09:54.541 }' 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:54.541 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.541 "params": { 00:09:54.541 "name": "Nvme1", 00:09:54.541 "trtype": "tcp", 00:09:54.541 "traddr": "10.0.0.3", 00:09:54.541 "adrfam": "ipv4", 00:09:54.541 "trsvcid": "4420", 00:09:54.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.542 "hdgst": false, 00:09:54.542 "ddgst": false 00:09:54.542 }, 00:09:54.542 "method": "bdev_nvme_attach_controller" 00:09:54.542 }' 00:09:54.542 10:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64076 00:09:54.542 [2024-11-19 10:04:08.411137] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:54.542 [2024-11-19 10:04:08.411438] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:54.542 [2024-11-19 10:04:08.421841] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:54.542 [2024-11-19 10:04:08.421952] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:54.542 [2024-11-19 10:04:08.422143] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:09:54.542 [2024-11-19 10:04:08.422226] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:54.542 [2024-11-19 10:04:08.422678] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:54.542 [2024-11-19 10:04:08.422742] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:54.799 [2024-11-19 10:04:08.674576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.057 [2024-11-19 10:04:08.737009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:55.057 [2024-11-19 10:04:08.751020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.057 [2024-11-19 10:04:08.755700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.057 [2024-11-19 10:04:08.828729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.057 [2024-11-19 10:04:08.829230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:55.057 [2024-11-19 10:04:08.843795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.057 [2024-11-19 10:04:08.888752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:55.057 [2024-11-19 10:04:08.902001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.057 [2024-11-19 10:04:08.943591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.315 Running I/O for 1 seconds... 00:09:55.315 Running I/O for 1 seconds... 00:09:55.315 [2024-11-19 10:04:09.009227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:55.315 Running I/O for 1 seconds... 00:09:55.315 [2024-11-19 10:04:09.022721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.315 Running I/O for 1 seconds... 
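Each one-second job then prints a small result table: IOPS, MiB/s, failures and timeouts per second, and average/min/max latency in microseconds. The MiB/s column is simply IOPS times the 4096-byte I/O size, which makes the tables easy to sanity-check; for the flush job below, for example:

    awk 'BEGIN { printf "%.2f MiB/s\n", 168739.89 * 4096 / (1024 * 1024) }'   # prints 659.14, matching the table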
00:09:56.249 7132.00 IOPS, 27.86 MiB/s
00:09:56.249 Latency(us)
00:09:56.249 [2024-11-19T10:04:10.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:56.249 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:56.249 Nvme1n1 : 1.01 7168.42 28.00 0.00 0.00 17737.12 8162.21 20614.05
00:09:56.249 [2024-11-19T10:04:10.138Z] ===================================================================================================================
00:09:56.249 [2024-11-19T10:04:10.138Z] Total : 7168.42 28.00 0.00 0.00 17737.12 8162.21 20614.05
00:09:56.249 169080.00 IOPS, 660.47 MiB/s
00:09:56.249 Latency(us)
00:09:56.249 [2024-11-19T10:04:10.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:56.249 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:56.249 Nvme1n1 : 1.00 168739.89 659.14 0.00 0.00 754.66 385.40 1980.97
00:09:56.249 [2024-11-19T10:04:10.138Z] ===================================================================================================================
00:09:56.249 [2024-11-19T10:04:10.138Z] Total : 168739.89 659.14 0.00 0.00 754.66 385.40 1980.97
00:09:56.249 6243.00 IOPS, 24.39 MiB/s
00:09:56.249 Latency(us)
00:09:56.249 [2024-11-19T10:04:10.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:56.249 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:56.249 Nvme1n1 : 1.02 6301.42 24.61 0.00 0.00 20180.27 11200.70 29908.25
00:09:56.249 [2024-11-19T10:04:10.138Z] ===================================================================================================================
00:09:56.249 [2024-11-19T10:04:10.138Z] Total : 6301.42 24.61 0.00 0.00 20180.27 11200.70 29908.25
00:09:56.508 5454.00 IOPS, 21.30 MiB/s
00:09:56.508 Latency(us)
00:09:56.508 [2024-11-19T10:04:10.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:56.508 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:56.508 Nvme1n1 : 1.01 5521.95 21.57 0.00 0.00 23032.80 4438.57 39798.23
00:09:56.508 [2024-11-19T10:04:10.397Z] ===================================================================================================================
00:09:56.508 [2024-11-19T10:04:10.397Z] Total : 5521.95 21.57 0.00 0.00 23032.80 4438.57 39798.23
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64078
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64081
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64083
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- #
nvmfcleanup 00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.508 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.508 rmmod nvme_tcp 00:09:56.508 rmmod nvme_fabrics 00:09:56.766 rmmod nvme_keyring 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64048 ']' 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64048 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64048 ']' 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64048 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64048 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.766 killing process with pid 64048 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64048' 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64048 00:09:56.766 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64048 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:57.024 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:57.025 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:57.025 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:57.025 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:57.025 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:57.025 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:57.025 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.025 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:57.284 00:09:57.284 real 0m3.903s 00:09:57.284 user 0m15.284s 00:09:57.284 sys 0m2.478s 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 ************************************ 00:09:57.284 END TEST nvmf_bdev_io_wait 00:09:57.284 ************************************ 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 ************************************ 00:09:57.284 START TEST nvmf_queue_depth 00:09:57.284 ************************************ 00:09:57.284 10:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:57.284 * Looking for test storage... 
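Before the queue_depth test starts, nvmftestfini above unwinds the whole fixture: the target process (pid 64048) is killed, the nvme-tcp and nvme-fabrics modules are unloaded, the SPDK_NVMF-tagged firewall rules are filtered back out, and the bridge, veth pairs and target namespace are deleted. Condensed from the trace (the final netns removal is what _remove_spdk_ns is assumed to do; it is not spelled out here):

    # Unload host-side NVMe/TCP modules
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    # Drop only the iptables rules the test added (they carry an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach and delete the bridge ports, the bridge, the veth pairs, then the namespace
    for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br_if" nomaster
        ip link set "$br_if" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns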
00:09:57.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.284 --rc genhtml_branch_coverage=1 00:09:57.284 --rc genhtml_function_coverage=1 00:09:57.284 --rc genhtml_legend=1 00:09:57.284 --rc geninfo_all_blocks=1 00:09:57.284 --rc geninfo_unexecuted_blocks=1 00:09:57.284 00:09:57.284 ' 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.284 --rc genhtml_branch_coverage=1 00:09:57.284 --rc genhtml_function_coverage=1 00:09:57.284 --rc genhtml_legend=1 00:09:57.284 --rc geninfo_all_blocks=1 00:09:57.284 --rc geninfo_unexecuted_blocks=1 00:09:57.284 00:09:57.284 ' 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.284 --rc genhtml_branch_coverage=1 00:09:57.284 --rc genhtml_function_coverage=1 00:09:57.284 --rc genhtml_legend=1 00:09:57.284 --rc geninfo_all_blocks=1 00:09:57.284 --rc geninfo_unexecuted_blocks=1 00:09:57.284 00:09:57.284 ' 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.284 --rc genhtml_branch_coverage=1 00:09:57.284 --rc genhtml_function_coverage=1 00:09:57.284 --rc genhtml_legend=1 00:09:57.284 --rc geninfo_all_blocks=1 00:09:57.284 --rc geninfo_unexecuted_blocks=1 00:09:57.284 00:09:57.284 ' 00:09:57.284 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.543 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.544 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:57.544 
10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.544 10:04:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:57.544 Cannot find device "nvmf_init_br" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:57.544 Cannot find device "nvmf_init_br2" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:57.544 Cannot find device "nvmf_tgt_br" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.544 Cannot find device "nvmf_tgt_br2" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:57.544 Cannot find device "nvmf_init_br" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:57.544 Cannot find device "nvmf_init_br2" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:57.544 Cannot find device "nvmf_tgt_br" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:57.544 Cannot find device "nvmf_tgt_br2" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:57.544 Cannot find device "nvmf_br" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:57.544 Cannot find device "nvmf_init_if" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:57.544 Cannot find device "nvmf_init_if2" 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.544 10:04:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:57.544 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:57.545 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.804 
10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:57.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:09:57.804 00:09:57.804 --- 10.0.0.3 ping statistics --- 00:09:57.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.804 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:57.804 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:57.804 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:57.804 00:09:57.804 --- 10.0.0.4 ping statistics --- 00:09:57.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.804 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:57.804 00:09:57.804 --- 10.0.0.1 ping statistics --- 00:09:57.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.804 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:57.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:57.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:57.804 00:09:57.804 --- 10.0.0.2 ping statistics --- 00:09:57.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.804 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64349 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64349 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64349 ']' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.804 10:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.804 [2024-11-19 10:04:11.674868] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
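The queue_depth test rebuilds the same fixture from scratch, which is why the guarded pre-cleanup above prints the expected "Cannot find device ..." and "Cannot open network namespace ..." messages, and then starts a fresh target pinned to core mask 0x2. Condensed to the first initiator/target pair (the *_if2 interfaces, the 10.0.0.2/10.0.0.4 addresses, the link-up steps and the iptables rules are elided; the rpc.py form of rpc_cmd is, as before, an assumption):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Single-core target for the queue-depth runs, then the TCP transport it will listen on
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192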
00:09:57.804 [2024-11-19 10:04:11.675332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.062 [2024-11-19 10:04:11.832164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.062 [2024-11-19 10:04:11.901459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.062 [2024-11-19 10:04:11.901780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.062 [2024-11-19 10:04:11.901981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.062 [2024-11-19 10:04:11.901999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.062 [2024-11-19 10:04:11.902008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.062 [2024-11-19 10:04:11.902476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.321 [2024-11-19 10:04:11.959707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.321 [2024-11-19 10:04:12.079832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.321 Malloc0 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.321 [2024-11-19 10:04:12.133606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:58.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.321 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64369 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64369 /var/tmp/bdevperf.sock 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64369 ']' 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.322 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.322 [2024-11-19 10:04:12.205347] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:09:58.322 [2024-11-19 10:04:12.205857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64369 ] 00:09:58.580 [2024-11-19 10:04:12.358639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.580 [2024-11-19 10:04:12.429376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.840 [2024-11-19 10:04:12.493110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.840 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.840 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:58.840 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:58.840 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.840 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.840 NVMe0n1 00:09:58.840 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.840 10:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:59.099 Running I/O for 10 seconds... 00:10:00.969 6144.00 IOPS, 24.00 MiB/s [2024-11-19T10:04:15.793Z] 6666.00 IOPS, 26.04 MiB/s [2024-11-19T10:04:17.178Z] 6935.00 IOPS, 27.09 MiB/s [2024-11-19T10:04:18.113Z] 7169.00 IOPS, 28.00 MiB/s [2024-11-19T10:04:19.048Z] 7308.40 IOPS, 28.55 MiB/s [2024-11-19T10:04:19.982Z] 7469.33 IOPS, 29.18 MiB/s [2024-11-19T10:04:20.916Z] 7604.29 IOPS, 29.70 MiB/s [2024-11-19T10:04:21.850Z] 7671.75 IOPS, 29.97 MiB/s [2024-11-19T10:04:22.784Z] 7731.44 IOPS, 30.20 MiB/s [2024-11-19T10:04:23.043Z] 7767.40 IOPS, 30.34 MiB/s 00:10:09.154 Latency(us) 00:10:09.154 [2024-11-19T10:04:23.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.154 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:09.154 Verification LBA range: start 0x0 length 0x4000 00:10:09.154 NVMe0n1 : 10.08 7780.37 30.39 0.00 0.00 130881.71 26452.71 105334.23 00:10:09.154 [2024-11-19T10:04:23.043Z] =================================================================================================================== 00:10:09.154 [2024-11-19T10:04:23.043Z] Total : 7780.37 30.39 0.00 0.00 130881.71 26452.71 105334.23 00:10:09.154 { 00:10:09.154 "results": [ 00:10:09.154 { 00:10:09.154 "job": "NVMe0n1", 00:10:09.154 "core_mask": "0x1", 00:10:09.154 "workload": "verify", 00:10:09.154 "status": "finished", 00:10:09.154 "verify_range": { 00:10:09.154 "start": 0, 00:10:09.154 "length": 16384 00:10:09.154 }, 00:10:09.155 "queue_depth": 1024, 00:10:09.155 "io_size": 4096, 00:10:09.155 "runtime": 10.084616, 00:10:09.155 "iops": 7780.3656579487015, 00:10:09.155 "mibps": 30.392053351362115, 00:10:09.155 "io_failed": 0, 00:10:09.155 "io_timeout": 0, 00:10:09.155 "avg_latency_us": 130881.71459805673, 00:10:09.155 "min_latency_us": 26452.712727272727, 00:10:09.155 "max_latency_us": 105334.22545454545 
00:10:09.155 } 00:10:09.155 ], 00:10:09.155 "core_count": 1 00:10:09.155 } 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64369 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64369 ']' 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64369 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64369 00:10:09.155 killing process with pid 64369 00:10:09.155 Received shutdown signal, test time was about 10.000000 seconds 00:10:09.155 00:10:09.155 Latency(us) 00:10:09.155 [2024-11-19T10:04:23.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.155 [2024-11-19T10:04:23.044Z] =================================================================================================================== 00:10:09.155 [2024-11-19T10:04:23.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64369' 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64369 00:10:09.155 10:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64369 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.413 rmmod nvme_tcp 00:10:09.413 rmmod nvme_fabrics 00:10:09.413 rmmod nvme_keyring 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64349 ']' 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64349 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64349 ']' 
00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64349 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64349 00:10:09.413 killing process with pid 64349 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64349' 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64349 00:10:09.413 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64349 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:09.671 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:09.672 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:09.672 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.672 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:09.672 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:09.672 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:09.672 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:09.672 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:09.929 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:09.929 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:09.929 10:04:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.929 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.929 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:09.929 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.929 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:09.930 00:10:09.930 real 0m12.709s 00:10:09.930 user 0m21.370s 00:10:09.930 sys 0m2.384s 00:10:09.930 ************************************ 00:10:09.930 END TEST nvmf_queue_depth 00:10:09.930 ************************************ 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.930 ************************************ 00:10:09.930 START TEST nvmf_target_multipath 00:10:09.930 ************************************ 00:10:09.930 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:10.189 * Looking for test storage... 
00:10:10.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.189 --rc genhtml_branch_coverage=1 00:10:10.189 --rc genhtml_function_coverage=1 00:10:10.189 --rc genhtml_legend=1 00:10:10.189 --rc geninfo_all_blocks=1 00:10:10.189 --rc geninfo_unexecuted_blocks=1 00:10:10.189 00:10:10.189 ' 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.189 --rc genhtml_branch_coverage=1 00:10:10.189 --rc genhtml_function_coverage=1 00:10:10.189 --rc genhtml_legend=1 00:10:10.189 --rc geninfo_all_blocks=1 00:10:10.189 --rc geninfo_unexecuted_blocks=1 00:10:10.189 00:10:10.189 ' 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.189 --rc genhtml_branch_coverage=1 00:10:10.189 --rc genhtml_function_coverage=1 00:10:10.189 --rc genhtml_legend=1 00:10:10.189 --rc geninfo_all_blocks=1 00:10:10.189 --rc geninfo_unexecuted_blocks=1 00:10:10.189 00:10:10.189 ' 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.189 --rc genhtml_branch_coverage=1 00:10:10.189 --rc genhtml_function_coverage=1 00:10:10.189 --rc genhtml_legend=1 00:10:10.189 --rc geninfo_all_blocks=1 00:10:10.189 --rc geninfo_unexecuted_blocks=1 00:10:10.189 00:10:10.189 ' 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.189 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.190 
10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:10.190 10:04:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:10.190 Cannot find device "nvmf_init_br" 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:10.190 10:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:10.190 Cannot find device "nvmf_init_br2" 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:10.190 Cannot find device "nvmf_tgt_br" 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.190 Cannot find device "nvmf_tgt_br2" 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:10.190 Cannot find device "nvmf_init_br" 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:10.190 Cannot find device "nvmf_init_br2" 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:10.190 Cannot find device "nvmf_tgt_br" 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:10.190 Cannot find device "nvmf_tgt_br2" 00:10:10.190 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:10.191 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:10.449 Cannot find device "nvmf_br" 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:10.449 Cannot find device "nvmf_init_if" 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:10.449 Cannot find device "nvmf_init_if2" 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:10.449 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:10.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:10.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:10:10.708 00:10:10.708 --- 10.0.0.3 ping statistics --- 00:10:10.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.708 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:10.708 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:10.708 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:10:10.708 00:10:10.708 --- 10.0.0.4 ping statistics --- 00:10:10.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.708 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:10.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:10:10.708 00:10:10.708 --- 10.0.0.1 ping statistics --- 00:10:10.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.708 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:10.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:10.708 00:10:10.708 --- 10.0.0.2 ping statistics --- 00:10:10.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.708 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64732 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64732 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64732 ']' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.708 10:04:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.708 [2024-11-19 10:04:24.459525] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:10:10.708 [2024-11-19 10:04:24.459622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.966 [2024-11-19 10:04:24.611862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.966 [2024-11-19 10:04:24.682624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.966 [2024-11-19 10:04:24.682908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.966 [2024-11-19 10:04:24.683086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.966 [2024-11-19 10:04:24.683250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.966 [2024-11-19 10:04:24.683300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.966 [2024-11-19 10:04:24.684697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.966 [2024-11-19 10:04:24.684781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.966 [2024-11-19 10:04:24.684932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.966 [2024-11-19 10:04:24.684939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.966 [2024-11-19 10:04:24.742179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.900 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.900 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:10:11.900 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.900 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.900 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:11.900 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.900 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:12.158 [2024-11-19 10:04:25.807498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.158 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:12.416 Malloc0 00:10:12.416 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:12.676 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.934 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:13.192 [2024-11-19 10:04:27.055237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:13.193 10:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:13.451 [2024-11-19 10:04:27.323444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:13.710 10:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:13.710 10:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:13.968 10:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:13.968 10:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:10:13.968 10:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.968 10:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:13.968 10:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64827 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:15.871 10:04:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:15.871 [global] 00:10:15.871 thread=1 00:10:15.871 invalidate=1 00:10:15.871 rw=randrw 00:10:15.871 time_based=1 00:10:15.871 runtime=6 00:10:15.871 ioengine=libaio 00:10:15.871 direct=1 00:10:15.871 bs=4096 00:10:15.871 iodepth=128 00:10:15.871 norandommap=0 00:10:15.871 numjobs=1 00:10:15.871 00:10:15.871 verify_dump=1 00:10:15.871 verify_backlog=512 00:10:15.871 verify_state_save=0 00:10:15.871 do_verify=1 00:10:15.871 verify=crc32c-intel 00:10:15.871 [job0] 00:10:15.871 filename=/dev/nvme0n1 00:10:15.871 Could not set queue depth (nvme0n1) 00:10:16.130 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.130 fio-3.35 00:10:16.130 Starting 1 thread 00:10:17.065 10:04:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:17.323 10:04:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:17.582 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:17.841 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:18.099 10:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64827 00:10:22.288 00:10:22.288 job0: (groupid=0, jobs=1): err= 0: pid=64858: Tue Nov 19 10:04:35 2024 00:10:22.288 read: IOPS=10.2k, BW=40.0MiB/s (42.0MB/s)(240MiB/6006msec) 00:10:22.288 slat (usec): min=2, max=7172, avg=56.37, stdev=216.17 00:10:22.288 clat (usec): min=1665, max=15932, avg=8465.25, stdev=1442.96 00:10:22.288 lat (usec): min=1678, max=15941, avg=8521.62, stdev=1446.55 00:10:22.288 clat percentiles (usec): 00:10:22.288 | 1.00th=[ 4490], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 7767], 00:10:22.288 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:10:22.288 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11994], 00:10:22.288 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14222], 99.95th=[14484], 00:10:22.288 | 99.99th=[15008] 00:10:22.288 bw ( KiB/s): min= 8208, max=25288, per=52.05%, avg=21330.00, stdev=6400.05, samples=11 00:10:22.288 iops : min= 2052, max= 6322, avg=5332.45, stdev=1599.98, samples=11 00:10:22.288 write: IOPS=6134, BW=24.0MiB/s (25.1MB/s)(127MiB/5319msec); 0 zone resets 00:10:22.288 slat (usec): min=7, max=2140, avg=66.17, stdev=147.56 00:10:22.288 clat (usec): min=1201, max=14828, avg=7380.12, stdev=1258.88 00:10:22.288 lat (usec): min=1249, max=14850, avg=7446.29, stdev=1263.23 00:10:22.288 clat percentiles (usec): 00:10:22.288 | 1.00th=[ 3523], 5.00th=[ 4490], 10.00th=[ 5932], 20.00th=[ 6915], 00:10:22.288 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:10:22.288 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8717], 00:10:22.288 | 99.00th=[11338], 99.50th=[12125], 99.90th=[13173], 99.95th=[13304], 00:10:22.288 | 99.99th=[14091] 00:10:22.288 bw ( KiB/s): min= 8672, max=25053, per=87.16%, avg=21388.09, stdev=6177.61, samples=11 00:10:22.288 iops : min= 2168, max= 6263, avg=5347.00, stdev=1544.39, samples=11 00:10:22.288 lat (msec) : 2=0.03%, 4=1.20%, 10=92.95%, 20=5.82% 00:10:22.288 cpu : usr=5.88%, sys=24.55%, ctx=5475, majf=0, minf=90 00:10:22.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:22.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.288 issued rwts: total=61533,32630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.288 00:10:22.288 Run status group 0 (all jobs): 00:10:22.288 READ: bw=40.0MiB/s (42.0MB/s), 40.0MiB/s-40.0MiB/s (42.0MB/s-42.0MB/s), io=240MiB (252MB), run=6006-6006msec 00:10:22.288 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=127MiB (134MB), run=5319-5319msec 00:10:22.288 00:10:22.288 Disk stats (read/write): 00:10:22.288 nvme0n1: ios=60889/31751, merge=0/0, ticks=493068/218817, in_queue=711885, util=98.71% 00:10:22.288 10:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:22.546 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:22.805 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:22.805 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64936 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:22.806 10:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:23.063 [global] 00:10:23.063 thread=1 00:10:23.063 invalidate=1 00:10:23.063 rw=randrw 00:10:23.063 time_based=1 00:10:23.063 runtime=6 00:10:23.063 ioengine=libaio 00:10:23.063 direct=1 00:10:23.063 bs=4096 00:10:23.063 iodepth=128 00:10:23.063 norandommap=0 00:10:23.063 numjobs=1 00:10:23.063 00:10:23.063 verify_dump=1 00:10:23.063 verify_backlog=512 00:10:23.063 verify_state_save=0 00:10:23.063 do_verify=1 00:10:23.063 verify=crc32c-intel 00:10:23.063 [job0] 00:10:23.063 filename=/dev/nvme0n1 00:10:23.063 Could not set queue depth (nvme0n1) 00:10:23.063 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.063 fio-3.35 00:10:23.063 Starting 1 thread 00:10:24.022 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:24.280 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:24.845 
10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:24.845 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:25.104 10:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:25.363 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:25.363 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:25.363 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:25.363 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:25.363 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:25.363 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:25.363 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:25.364 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:25.364 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:25.364 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:25.364 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:25.364 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:25.364 10:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64936 00:10:29.551 00:10:29.551 job0: (groupid=0, jobs=1): err= 0: pid=64957: Tue Nov 19 10:04:42 2024 00:10:29.551 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(270MiB/6007msec) 00:10:29.551 slat (usec): min=2, max=7444, avg=42.98, stdev=193.85 00:10:29.551 clat (usec): min=259, max=18889, avg=7601.52, stdev=2305.05 00:10:29.551 lat (usec): min=292, max=18916, avg=7644.51, stdev=2317.73 00:10:29.551 clat percentiles (usec): 00:10:29.551 | 1.00th=[ 1713], 5.00th=[ 3490], 10.00th=[ 4490], 20.00th=[ 5669], 00:10:29.551 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8225], 00:10:29.551 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[11338], 00:10:29.551 | 99.00th=[13566], 99.50th=[14746], 99.90th=[17433], 99.95th=[17957], 00:10:29.551 | 99.99th=[18482] 00:10:29.551 bw ( KiB/s): min=11624, max=36320, per=53.73%, avg=24769.50, stdev=7739.22, samples=12 00:10:29.551 iops : min= 2906, max= 9080, avg=6192.33, stdev=1934.82, samples=12 00:10:29.551 write: IOPS=7075, BW=27.6MiB/s (29.0MB/s)(145MiB/5253msec); 0 zone resets 00:10:29.551 slat (usec): min=12, max=4957, avg=55.47, stdev=115.15 00:10:29.551 clat (usec): min=249, max=18286, avg=6356.62, stdev=2111.77 00:10:29.551 lat (usec): min=275, max=18309, avg=6412.09, stdev=2122.19 00:10:29.551 clat percentiles (usec): 00:10:29.551 | 1.00th=[ 1434], 5.00th=[ 2802], 10.00th=[ 3589], 20.00th=[ 4293], 00:10:29.551 | 30.00th=[ 4948], 40.00th=[ 5997], 50.00th=[ 6915], 60.00th=[ 7373], 00:10:29.551 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9241], 00:10:29.551 | 99.00th=[11469], 99.50th=[12256], 99.90th=[14615], 99.95th=[15795], 00:10:29.551 | 99.99th=[16909] 00:10:29.551 bw ( KiB/s): min=12280, max=36104, per=87.39%, avg=24732.17, stdev=7468.11, samples=12 00:10:29.551 iops : min= 3070, max= 9026, avg=6183.00, stdev=1867.04, samples=12 00:10:29.551 lat (usec) : 250=0.01%, 500=0.05%, 750=0.10%, 1000=0.18% 00:10:29.551 lat (msec) : 2=1.35%, 4=8.11%, 10=83.31%, 20=6.89% 00:10:29.551 cpu : usr=7.06%, sys=29.37%, ctx=6387, majf=0, minf=114 00:10:29.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:29.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.551 issued rwts: total=69229,37168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.551 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:10:29.551 00:10:29.551 Run status group 0 (all jobs): 00:10:29.551 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=270MiB (284MB), run=6007-6007msec 00:10:29.551 WRITE: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=145MiB (152MB), run=5253-5253msec 00:10:29.551 00:10:29.551 Disk stats (read/write): 00:10:29.551 nvme0n1: ios=68330/36573, merge=0/0, ticks=486174/209217, in_queue=695391, util=98.63% 00:10:29.551 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.552 rmmod nvme_tcp 00:10:29.552 rmmod nvme_fabrics 00:10:29.552 rmmod nvme_keyring 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64732 ']' 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64732 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64732 ']' 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64732 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.552 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64732 00:10:29.810 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.810 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.810 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64732' 00:10:29.810 killing process with pid 64732 00:10:29.810 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64732 00:10:29.811 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64732 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:30.069 
10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:30.069 00:10:30.069 real 0m20.202s 00:10:30.069 user 1m16.009s 00:10:30.069 sys 0m10.024s 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.069 ************************************ 00:10:30.069 END TEST nvmf_target_multipath 00:10:30.069 ************************************ 00:10:30.069 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:30.328 10:04:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:30.328 10:04:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.328 10:04:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.328 10:04:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.328 ************************************ 00:10:30.328 START TEST nvmf_zcopy 00:10:30.328 ************************************ 00:10:30.328 10:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:30.328 * Looking for test storage... 
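Note on the nvmf_target_multipath run that just finished above: it exposes one subsystem through two TCP listeners (10.0.0.3 and 10.0.0.4), flips their ANA states between optimized, non_optimized and inaccessible while fio runs, and the repeated multipath.sh@18-25 traces are a helper polling sysfs until the expected state appears. A minimal reconstruction of that helper, assuming the retry/sleep loop that the xtrace output elides, looks roughly like:

    # Hypothetical sketch of multipath.sh's check_ana_state helper (reconstructed from the trace above, not the literal source).
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Re-read the sysfs attribute until it reports the expected ANA state or we run out of retries.
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }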
00:10:30.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.328 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.328 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.328 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.329 --rc genhtml_branch_coverage=1 00:10:30.329 --rc genhtml_function_coverage=1 00:10:30.329 --rc genhtml_legend=1 00:10:30.329 --rc geninfo_all_blocks=1 00:10:30.329 --rc geninfo_unexecuted_blocks=1 00:10:30.329 00:10:30.329 ' 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.329 --rc genhtml_branch_coverage=1 00:10:30.329 --rc genhtml_function_coverage=1 00:10:30.329 --rc genhtml_legend=1 00:10:30.329 --rc geninfo_all_blocks=1 00:10:30.329 --rc geninfo_unexecuted_blocks=1 00:10:30.329 00:10:30.329 ' 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.329 --rc genhtml_branch_coverage=1 00:10:30.329 --rc genhtml_function_coverage=1 00:10:30.329 --rc genhtml_legend=1 00:10:30.329 --rc geninfo_all_blocks=1 00:10:30.329 --rc geninfo_unexecuted_blocks=1 00:10:30.329 00:10:30.329 ' 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.329 --rc genhtml_branch_coverage=1 00:10:30.329 --rc genhtml_function_coverage=1 00:10:30.329 --rc genhtml_legend=1 00:10:30.329 --rc geninfo_all_blocks=1 00:10:30.329 --rc geninfo_unexecuted_blocks=1 00:10:30.329 00:10:30.329 ' 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
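The scripts/common.sh trace above is the harness checking whether the installed lcov (1.15) is older than 2 before choosing coverage flags. Stripped of xtrace noise, the comparison amounts to something like the simplified sketch below (an assumed reconstruction with plain numeric dotted fields, not the exact helper):

    # Simplified sketch of the "lt 1.15 2" check traced above.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    # version_lt 1.15 2 succeeds here, which is why the --rc lcov_*_coverage flags are selected in the trace.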
00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.329 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.588 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.589 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
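nvmftestinit then builds a private veth/netns topology for the target process. Condensing the ip commands traced below (the second nvmf_init_if2/nvmf_tgt_if2 pair and the iptables ACCEPT rules are set up the same way and omitted here), the layout is roughly:

    # Rough condensation of the nvmf_veth_init commands traced below (sketch, not the full helper).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listener address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br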
00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:30.589 Cannot find device "nvmf_init_br" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:30.589 10:04:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:30.589 Cannot find device "nvmf_init_br2" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:30.589 Cannot find device "nvmf_tgt_br" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.589 Cannot find device "nvmf_tgt_br2" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:30.589 Cannot find device "nvmf_init_br" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:30.589 Cannot find device "nvmf_init_br2" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:30.589 Cannot find device "nvmf_tgt_br" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:30.589 Cannot find device "nvmf_tgt_br2" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:30.589 Cannot find device "nvmf_br" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:30.589 Cannot find device "nvmf_init_if" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:30.589 Cannot find device "nvmf_init_if2" 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.589 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:30.848 10:04:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:30.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:30.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:10:30.848 00:10:30.848 --- 10.0.0.3 ping statistics --- 00:10:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.848 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:30.848 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:30.848 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:30.848 00:10:30.848 --- 10.0.0.4 ping statistics --- 00:10:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.848 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:30.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:30.848 00:10:30.848 --- 10.0.0.1 ping statistics --- 00:10:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.848 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:30.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:30.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:10:30.848 00:10:30.848 --- 10.0.0.2 ping statistics --- 00:10:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.848 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65263 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65263 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65263 ']' 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.848 10:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.848 [2024-11-19 10:04:44.709747] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:10:30.848 [2024-11-19 10:04:44.709870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.108 [2024-11-19 10:04:44.862149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.108 [2024-11-19 10:04:44.942850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.108 [2024-11-19 10:04:44.942946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.108 [2024-11-19 10:04:44.942969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.108 [2024-11-19 10:04:44.942987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.108 [2024-11-19 10:04:44.943002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.108 [2024-11-19 10:04:44.943480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.366 [2024-11-19 10:04:45.006220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.931 [2024-11-19 10:04:45.788031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.931 [2024-11-19 10:04:45.804215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.931 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.188 malloc0 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:32.188 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:32.188 { 00:10:32.189 "params": { 00:10:32.189 "name": "Nvme$subsystem", 00:10:32.189 "trtype": "$TEST_TRANSPORT", 00:10:32.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.189 "adrfam": "ipv4", 00:10:32.189 "trsvcid": "$NVMF_PORT", 00:10:32.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.189 "hdgst": ${hdgst:-false}, 00:10:32.189 "ddgst": ${ddgst:-false} 00:10:32.189 }, 00:10:32.189 "method": "bdev_nvme_attach_controller" 00:10:32.189 } 00:10:32.189 EOF 00:10:32.189 )") 00:10:32.189 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:32.189 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
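For reference, the target-side configuration driven through rpc_cmd in the trace above reduces to the following RPC sequence. This is a hedged standalone sketch that calls scripts/rpc.py directly; the flags are copied from the trace, and the socket path is the /var/tmp/spdk.sock address shown in the waitforlisten line (the test itself goes through its rpc_cmd wrapper):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    # TCP transport with the same flags as in the trace (-o, -c 0) plus
    # --zcopy, the zero-copy behaviour this test exercises.
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

    # One subsystem backed by a small malloc bdev (32 MB, 4096-byte blocks),
    # listening on the namespaced target address 10.0.0.3:4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to that subsystem over NVMe/TCP: the JSON being assembled in the heredoc above is fed to it on /dev/fd/62 and resolves to a single bdev_nvme_attach_controller call for Nvme1 at 10.0.0.3:4420, as printed a few entries below.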
00:10:32.189 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:32.189 10:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:32.189 "params": { 00:10:32.189 "name": "Nvme1", 00:10:32.189 "trtype": "tcp", 00:10:32.189 "traddr": "10.0.0.3", 00:10:32.189 "adrfam": "ipv4", 00:10:32.189 "trsvcid": "4420", 00:10:32.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.189 "hdgst": false, 00:10:32.189 "ddgst": false 00:10:32.189 }, 00:10:32.189 "method": "bdev_nvme_attach_controller" 00:10:32.189 }' 00:10:32.189 [2024-11-19 10:04:45.898029] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:10:32.189 [2024-11-19 10:04:45.898127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65296 ] 00:10:32.189 [2024-11-19 10:04:46.043011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.446 [2024-11-19 10:04:46.119661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.446 [2024-11-19 10:04:46.182677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:32.446 Running I/O for 10 seconds... 00:10:34.814 5308.00 IOPS, 41.47 MiB/s [2024-11-19T10:04:49.637Z] 5308.50 IOPS, 41.47 MiB/s [2024-11-19T10:04:50.572Z] 5332.00 IOPS, 41.66 MiB/s [2024-11-19T10:04:51.508Z] 5326.50 IOPS, 41.61 MiB/s [2024-11-19T10:04:52.442Z] 5321.80 IOPS, 41.58 MiB/s [2024-11-19T10:04:53.378Z] 5317.83 IOPS, 41.55 MiB/s [2024-11-19T10:04:54.313Z] 5290.43 IOPS, 41.33 MiB/s [2024-11-19T10:04:55.739Z] 5291.75 IOPS, 41.34 MiB/s [2024-11-19T10:04:56.673Z] 5293.33 IOPS, 41.35 MiB/s [2024-11-19T10:04:56.673Z] 5298.60 IOPS, 41.40 MiB/s 00:10:42.784 Latency(us) 00:10:42.784 [2024-11-19T10:04:56.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:42.784 Verification LBA range: start 0x0 length 0x1000 00:10:42.785 Nvme1n1 : 10.02 5300.25 41.41 0.00 0.00 24075.81 2115.03 33363.78 00:10:42.785 [2024-11-19T10:04:56.674Z] =================================================================================================================== 00:10:42.785 [2024-11-19T10:04:56.674Z] Total : 5300.25 41.41 0.00 0.00 24075.81 2115.03 33363.78 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65419 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:42.785 { 00:10:42.785 "params": { 00:10:42.785 "name": "Nvme$subsystem", 00:10:42.785 "trtype": "$TEST_TRANSPORT", 00:10:42.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.785 "adrfam": "ipv4", 00:10:42.785 "trsvcid": "$NVMF_PORT", 00:10:42.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.785 "hdgst": ${hdgst:-false}, 00:10:42.785 "ddgst": ${ddgst:-false} 00:10:42.785 }, 00:10:42.785 "method": "bdev_nvme_attach_controller" 00:10:42.785 } 00:10:42.785 EOF 00:10:42.785 )") 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:42.785 [2024-11-19 10:04:56.530531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.530757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:42.785 10:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:42.785 "params": { 00:10:42.785 "name": "Nvme1", 00:10:42.785 "trtype": "tcp", 00:10:42.785 "traddr": "10.0.0.3", 00:10:42.785 "adrfam": "ipv4", 00:10:42.785 "trsvcid": "4420", 00:10:42.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.785 "hdgst": false, 00:10:42.785 "ddgst": false 00:10:42.785 }, 00:10:42.785 "method": "bdev_nvme_attach_controller" 00:10:42.785 }' 00:10:42.785 [2024-11-19 10:04:56.542534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.542588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.554534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.554598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.566533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.566593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.578532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.578587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.587706] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:10:42.785 [2024-11-19 10:04:56.588109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65419 ] 00:10:42.785 [2024-11-19 10:04:56.590529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.590571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.602530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.602583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.614532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.614584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.626536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.626589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.638543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.638596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.650543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.650596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.658539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.658586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.785 [2024-11-19 10:04:56.666545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.785 [2024-11-19 10:04:56.666595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.674544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.674592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.686610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.686676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.698584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.698644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.710577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.710635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.722591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.722659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.733698] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:10:43.044 [2024-11-19 10:04:56.734588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.734631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.742577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.742628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.754594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.754658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.766595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.766655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.778600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.778662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.790627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.790692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.798600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.798651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.800181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.044 [2024-11-19 10:04:56.810598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.810653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.822615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.822682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.834626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.834702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.846633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.846699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.858619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.858679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.862323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.044 [2024-11-19 10:04:56.870635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.870699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.882636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:43.044 [2024-11-19 10:04:56.882696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.894676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.894752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.906630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.906692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.918689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.918752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.044 [2024-11-19 10:04:56.930672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.044 [2024-11-19 10:04:56.930738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:56.942691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:56.942752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:56.954742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:56.954792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:56.966702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:56.966758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:56.978722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:56.979066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 Running I/O for 5 seconds... 
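The long run of paired messages around this point ("Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext followed by "Unable to add namespace" from nvmf_rpc_ns_paused) appears to be deliberate test behaviour rather than a failure: nvmf_subsystem_add_ns keeps being re-issued for NSID 1, which already exists on cnode1, and the handler name nvmf_rpc_ns_paused suggests each attempt pauses the subsystem before the add is rejected and the subsystem is resumed, all while bdevperf (perfpid 65419) runs its 5-second randrw workload. A hypothetical sketch of such a loop (not the verbatim target/zcopy.sh code; the kill -0 termination condition and the "|| true" are assumptions, while the RPC call and the PID variable come from the trace) looks like:

    # Re-add an existing namespace in a tight loop so the subsystem is
    # repeatedly paused and resumed while I/O is in flight.
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done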
00:10:43.303 [2024-11-19 10:04:56.990714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:56.990975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.009151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.009385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.024802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.025072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.035838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.036178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.051870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.052269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.066932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.067259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.084444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.084787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.101041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.101325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.117044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.117342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.133392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.133466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.150656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.150726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.165332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.165403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.303 [2024-11-19 10:04:57.181222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.303 [2024-11-19 10:04:57.181300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.198080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.198162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.213603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 
[2024-11-19 10:04:57.213666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.229583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.229655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.246956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.247032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.263398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.263473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.280911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.281010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.295678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.295750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.312046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.312132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.328867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.328962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.346031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.346105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.360450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.360524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.376852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.376932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.392785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.393139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.410078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.410157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.427461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.427539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.562 [2024-11-19 10:04:57.443577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.562 [2024-11-19 10:04:57.443658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.454417] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.454489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.470008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.470081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.484888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.484982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.501046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.501122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.511763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.511825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.524607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.524981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.536782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.537122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.549342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.549418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.561530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.561600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.573535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.573611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.585962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.586034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.597510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.597573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.609956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.610032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.622219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.622298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.634393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.634468] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.646813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.646893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.662003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.662082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.820 [2024-11-19 10:04:57.672949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.820 [2024-11-19 10:04:57.673015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.821 [2024-11-19 10:04:57.685890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.821 [2024-11-19 10:04:57.685978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.821 [2024-11-19 10:04:57.698093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.821 [2024-11-19 10:04:57.698165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.710105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.710168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.722091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.722160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.734940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.735009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.750107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.750185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.765958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.766027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.782245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.782322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.799520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.799596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.815617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.815690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.832471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.832782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.847796] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.848172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.863651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.864006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.874578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.874642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.890969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.891050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.905566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.905640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.921593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.921672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.937448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.937518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.948076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.948143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.081 [2024-11-19 10:04:57.963781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.081 [2024-11-19 10:04:57.963853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:57.979047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:57.979113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 10233.00 IOPS, 79.95 MiB/s [2024-11-19T10:04:58.229Z] [2024-11-19 10:04:57.994347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:57.994416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.010195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.010262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.020973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.021040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.037254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.037319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.051910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:44.340 [2024-11-19 10:04:58.051986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.067281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.067613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.084001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.084281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.100771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.101015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.115869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.116144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.133179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.133242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.147468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.147531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.163555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.163627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.180354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.180720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.197066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.197135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.340 [2024-11-19 10:04:58.214675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.340 [2024-11-19 10:04:58.214759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.599 [2024-11-19 10:04:58.230673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.599 [2024-11-19 10:04:58.230738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.599 [2024-11-19 10:04:58.241729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.241790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.254941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.255033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.270391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.270692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.288424] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.288502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.304015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.304109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.319202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.319278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.335572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.335649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.352892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.352981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.369022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.369092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.386352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.386432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.402530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.402603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.419247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.419323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.436050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.436140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.452245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.452321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.469293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.469373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.600 [2024-11-19 10:04:58.486470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.600 [2024-11-19 10:04:58.486549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.501022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.501100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.517509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.517582] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.533371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.533444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.543441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.543771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.559947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.560019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.576564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.576638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.593289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.593363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.610588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.610896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.622119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.622184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.635988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.636060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.650631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.650696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.661205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.661505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.676194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.676506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.691815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.692233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.703149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.703448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.719822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.720169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.859 [2024-11-19 10:04:58.734434] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.859 [2024-11-19 10:04:58.734719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.117 [2024-11-19 10:04:58.751100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.751526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.763237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.763511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.775644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.775973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.792210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.792521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.803880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.804213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.816809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.816884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.829027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.829103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.841231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.841301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.858098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.858172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.870010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.870075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.882057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.882124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.893474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.893543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.905247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.905317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.921853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.921949] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.932529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.932814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.947835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.948192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.959261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.959561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.970258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.970587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.985257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 10268.00 IOPS, 80.22 MiB/s [2024-11-19T10:04:59.007Z] [2024-11-19 10:04:58.985579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.118 [2024-11-19 10:04:58.996113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.118 [2024-11-19 10:04:58.996179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.013822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.013908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.029469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.029559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.045907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.045995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.061833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.061908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.072892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.072984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.088966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.089039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.104604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.104679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.121807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.121880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 
10:04:59.138311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.138387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.155428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.155769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.171552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.171618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.186687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.186944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.202741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.202979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.220222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.220512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.236381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.236646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.377 [2024-11-19 10:04:59.252512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.377 [2024-11-19 10:04:59.252710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.268405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.268678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.279233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.279529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.295333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.295656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.309944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.310214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.326246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.326522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.342374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.342645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.359026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.359296] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.375376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.375683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.392237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.392560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.409900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.410196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.427152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.427468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.443952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.444248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.459996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.460082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.470760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.470828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.486794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.486872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.502061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.502134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.636 [2024-11-19 10:04:59.518178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.636 [2024-11-19 10:04:59.518251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.535333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.535409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.549466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.549750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.564645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.564956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.580853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.581203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.596502] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.596830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.607680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.607960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.623578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.623896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.640022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.640313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.655463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.655786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.670189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.670497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.685844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.686171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.702158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.702475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.718546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.718872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.729472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.729761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.745289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.745609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.761612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.761678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.895 [2024-11-19 10:04:59.777717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.895 [2024-11-19 10:04:59.777790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.796653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.796982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.813432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.813502] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.824611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.824904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.837597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.837667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.849968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.850038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.864831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.865129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.879880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.880215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.890889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.890960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.903964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.904033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.918571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.918889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.929592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.929898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.945109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.945427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.957500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.957571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.973506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.973586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 10249.67 IOPS, 80.08 MiB/s [2024-11-19T10:05:00.042Z] [2024-11-19 10:04:59.990319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.990382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:04:59.999851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:04:59.999906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 
10:05:00.015721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:05:00.016030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:05:00.026800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:05:00.026856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.153 [2024-11-19 10:05:00.039975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.153 [2024-11-19 10:05:00.040041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.052187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.052244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.068323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.068383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.084581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.084648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.095724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.095783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.108415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.108481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.123482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.123553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.138185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.138510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.153824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.154134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.164247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.164305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.177369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.177682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.190030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.190102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.205903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.205982] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.222065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.222140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.238727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.238796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.249342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.249406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.262590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.262674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.275673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.275730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.411 [2024-11-19 10:05:00.291194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.411 [2024-11-19 10:05:00.291253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.306394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.306452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.316870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.317109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.330466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.330669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.342985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.343241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.359657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.359865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.371121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.371416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.382938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.383202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.395298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.395570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.408042] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.408294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.422528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.422769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.438146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.438424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.448369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.448633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.461221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.461505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.472872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.473121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.488830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.489149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.505583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.505892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.517207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.517275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.529578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.529648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.545900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.545986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.670 [2024-11-19 10:05:00.557126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.670 [2024-11-19 10:05:00.557186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.928 [2024-11-19 10:05:00.569057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.928 [2024-11-19 10:05:00.569120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.928 [2024-11-19 10:05:00.580989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.928 [2024-11-19 10:05:00.581051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.928 [2024-11-19 10:05:00.597722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.928 [2024-11-19 10:05:00.597793] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.613992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.614062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.633247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.633316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.647900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.647980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.658084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.658143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.670987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.671054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.686225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.686523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.703285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.703350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.719446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.719514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.730062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.730117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.743454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.743520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.758359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.758425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.775778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.775851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.792366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.792440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.802455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.802519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.929 [2024-11-19 10:05:00.815377] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.929 [2024-11-19 10:05:00.815450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.830901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.830982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.848051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.848133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.858819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.858886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.874764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.874836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.889787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.890098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.905779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.906083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.923201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.923272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.938794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.938861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.948802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.948869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.961979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.962041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:00.973903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.973979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 10309.25 IOPS, 80.54 MiB/s [2024-11-19T10:05:01.076Z] [2024-11-19 10:05:00.990165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:00.990231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:01.007685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:01.007754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:01.018516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
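Each "Requested NSID 1 already in use" / "Unable to add namespace" pair in this stretch of the log is the target rejecting an nvmf_subsystem_add_ns RPC for an NSID that is still attached, while the zero-copy I/O job keeps running (the interleaved "... IOPS, ... MiB/s" lines are its progress counters). A single failing attempt can be reproduced against a target configured like this one with roughly the following call; this is a sketch only, where the bdev name malloc0 and the subsystem NQN are taken from this test's setup and rpc_cmd is the suite's wrapper around scripts/rpc.py:

  # Expected to fail while NSID 1 is already claimed in the subsystem:
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # target side logs "Requested NSID 1 already in use"; the RPC layer reports "Unable to add namespace"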
00:10:47.187 [2024-11-19 10:05:01.018578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:01.030852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:01.030934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:01.046558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:01.046630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.187 [2024-11-19 10:05:01.063973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.187 [2024-11-19 10:05:01.064044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.078751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.078814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.094479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.094553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.104805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.104873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.118118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.118181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.130237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.130299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.142317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.142394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.158090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.158164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.173450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.173774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.188517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.188799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.199805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.200115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.211640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.211892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.226677] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.227017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.242724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.243031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.260307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.260623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.277395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.277690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.290719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.291017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.307764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.308113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.322488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.322796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.450 [2024-11-19 10:05:01.333011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.450 [2024-11-19 10:05:01.333073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.709 [2024-11-19 10:05:01.345946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.346021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.357544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.357616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.373938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.373999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.390460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.390513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.405866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.406122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.421122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.421393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.437207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.437438] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.454262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.454458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.469280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.469526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.480131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.480189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.493182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.493240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.505085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.505144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.521017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.521083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.538675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.538743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.549964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.550291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.565365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.565439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.579735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.580023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.710 [2024-11-19 10:05:01.590745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.710 [2024-11-19 10:05:01.590803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.603969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.604039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.620058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.620145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.634761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.634823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.644991] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.645048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.659816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.659886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.675134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.675202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.690981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.691047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.706958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.707027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.717114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.717177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.732775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.968 [2024-11-19 10:05:01.732865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.968 [2024-11-19 10:05:01.743702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.969 [2024-11-19 10:05:01.743765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.969 [2024-11-19 10:05:01.759106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.969 [2024-11-19 10:05:01.759179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.969 [2024-11-19 10:05:01.775427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.969 [2024-11-19 10:05:01.775487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.969 [2024-11-19 10:05:01.792700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.969 [2024-11-19 10:05:01.792767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.969 [2024-11-19 10:05:01.808956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.969 [2024-11-19 10:05:01.809025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.969 [2024-11-19 10:05:01.819457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.969 [2024-11-19 10:05:01.819515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.969 [2024-11-19 10:05:01.831954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.969 [2024-11-19 10:05:01.832013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.969 [2024-11-19 10:05:01.847903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.969 [2024-11-19 10:05:01.847983] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:01.864315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.864379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:01.883417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.883490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:01.895505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.895571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:01.911976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.912045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:01.926658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.927033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:01.943179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.943278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:01.959527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.959623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:01.978202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.978581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 10310.80 IOPS, 80.55 MiB/s [2024-11-19T10:05:02.116Z] [2024-11-19 10:05:01.992396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:01.992492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.003100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.003207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 00:10:48.227 Latency(us) 00:10:48.227 [2024-11-19T10:05:02.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.227 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:48.227 Nvme1n1 : 5.02 10302.18 80.49 0.00 0.00 12400.08 4825.83 24188.74 00:10:48.227 [2024-11-19T10:05:02.116Z] =================================================================================================================== 00:10:48.227 [2024-11-19T10:05:02.116Z] Total : 10302.18 80.49 0.00 0.00 12400.08 4825.83 24188.74 00:10:48.227 [2024-11-19 10:05:02.015241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.015544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.027139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 
10:05:02.027520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.039165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.039564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.051143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.051229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.063126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.063197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.075084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.075144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.087094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.087156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.099118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.099179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.227 [2024-11-19 10:05:02.111105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.227 [2024-11-19 10:05:02.111164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.486 [2024-11-19 10:05:02.123122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.486 [2024-11-19 10:05:02.123186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.486 [2024-11-19 10:05:02.135103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.486 [2024-11-19 10:05:02.135157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.486 [2024-11-19 10:05:02.147091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.486 [2024-11-19 10:05:02.147141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.486 [2024-11-19 10:05:02.159117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.486 [2024-11-19 10:05:02.159175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.486 [2024-11-19 10:05:02.171129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.486 [2024-11-19 10:05:02.171183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.486 [2024-11-19 10:05:02.183112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.486 [2024-11-19 10:05:02.183160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.486 [2024-11-19 10:05:02.195109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.486 [2024-11-19 10:05:02.195156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.486 [2024-11-19 10:05:02.207113] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.486 [2024-11-19 10:05:02.207162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.486 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65419) - No such process
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65419
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:48.486 delay0
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.486 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:10:48.744 [2024-11-19 10:05:02.437313] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:55.315 Initializing NVMe Controllers
00:10:55.315 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:10:55.315 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:55.315 Initialization complete. Launching workers.
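Stripped of the xtrace noise, the traced commands above swap the namespace backing for a deliberately slow delay bdev and then drive abortable I/O at it; the per-run abort statistics follow below. A condensed sketch of that sequence, assuming the target listening on 10.0.0.3:4420, the malloc0 bdev, and the rpc_cmd wrapper already set up earlier in this test:

  # Replace the existing namespace with a delay bdev so outstanding I/O stays in flight long enough to abort:
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # added latencies, in microseconds
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Exercise NVMe abort handling against it (core mask 0x1, 5 seconds, queue depth 64, 50/50 random read/write):
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
      -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'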
00:10:55.315 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 853 00:10:55.315 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1140, failed to submit 33 00:10:55.315 success 1020, unsuccessful 120, failed 0 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.315 rmmod nvme_tcp 00:10:55.315 rmmod nvme_fabrics 00:10:55.315 rmmod nvme_keyring 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65263 ']' 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65263 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65263 ']' 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65263 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.315 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65263 00:10:55.315 killing process with pid 65263 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65263' 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65263 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65263 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:55.316 10:05:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:55.316 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.316 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:55.573 ************************************ 00:10:55.573 END TEST nvmf_zcopy 00:10:55.573 ************************************ 00:10:55.573 00:10:55.573 real 0m25.228s 00:10:55.573 user 0m40.190s 00:10:55.573 sys 0m7.348s 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.573 ************************************ 00:10:55.573 START TEST nvmf_nmic 00:10:55.573 ************************************ 00:10:55.573 10:05:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:55.573 * Looking for test storage... 00:10:55.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:55.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.573 --rc genhtml_branch_coverage=1 00:10:55.573 --rc genhtml_function_coverage=1 00:10:55.573 --rc genhtml_legend=1 00:10:55.573 --rc geninfo_all_blocks=1 00:10:55.573 --rc geninfo_unexecuted_blocks=1 00:10:55.573 00:10:55.573 ' 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:55.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.573 --rc genhtml_branch_coverage=1 00:10:55.573 --rc genhtml_function_coverage=1 00:10:55.573 --rc genhtml_legend=1 00:10:55.573 --rc geninfo_all_blocks=1 00:10:55.573 --rc geninfo_unexecuted_blocks=1 00:10:55.573 00:10:55.573 ' 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:55.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.573 --rc genhtml_branch_coverage=1 00:10:55.573 --rc genhtml_function_coverage=1 00:10:55.573 --rc genhtml_legend=1 00:10:55.573 --rc geninfo_all_blocks=1 00:10:55.573 --rc geninfo_unexecuted_blocks=1 00:10:55.573 00:10:55.573 ' 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:55.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.573 --rc genhtml_branch_coverage=1 00:10:55.573 --rc genhtml_function_coverage=1 00:10:55.573 --rc genhtml_legend=1 00:10:55.573 --rc geninfo_all_blocks=1 00:10:55.573 --rc geninfo_unexecuted_blocks=1 00:10:55.573 00:10:55.573 ' 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.573 10:05:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.573 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.832 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.833 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:55.833 10:05:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:55.833 Cannot 
find device "nvmf_init_br" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:55.833 Cannot find device "nvmf_init_br2" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:55.833 Cannot find device "nvmf_tgt_br" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.833 Cannot find device "nvmf_tgt_br2" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:55.833 Cannot find device "nvmf_init_br" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:55.833 Cannot find device "nvmf_init_br2" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:55.833 Cannot find device "nvmf_tgt_br" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:55.833 Cannot find device "nvmf_tgt_br2" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:55.833 Cannot find device "nvmf_br" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:55.833 Cannot find device "nvmf_init_if" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:55.833 Cannot find device "nvmf_init_if2" 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:55.833 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:55.834 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:55.834 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:55.834 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:55.834 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:55.834 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:55.834 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:56.093 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:56.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:10:56.093 00:10:56.093 --- 10.0.0.3 ping statistics --- 00:10:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.093 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:56.093 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:56.093 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:10:56.093 00:10:56.093 --- 10.0.0.4 ping statistics --- 00:10:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.093 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:56.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:10:56.093 00:10:56.093 --- 10.0.0.1 ping statistics --- 00:10:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.093 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:56.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:10:56.093 00:10:56.093 --- 10.0.0.2 ping statistics --- 00:10:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.093 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.093 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65799 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65799 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65799 ']' 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.094 10:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.094 [2024-11-19 10:05:09.963275] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
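The ip/netns records above build the virtual fabric the rest of the run depends on: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends (nvmf_tgt_if at 10.0.0.3/24, nvmf_tgt_if2 at 10.0.0.4/24), the initiator-side ends left on the host (nvmf_init_if at 10.0.0.1/24, nvmf_init_if2 at 10.0.0.2/24), the bridge peers of all four enslaved to nvmf_br, and iptables ACCEPT rules for TCP port 4420, verified by the four pings. A minimal sketch reduced to a single initiator/target pair follows; interface names and addresses are copied from the logged commands, but this is an illustration, not the exact nvmf_veth_init helper from nvmf/common.sh:
  # create the target-side namespace and one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # address the endpoints: initiator on the host, target inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # tie the bridge-side peers together so the two halves can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP traffic to the discovery/IO port used by the test
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # initiator -> target reachability, as verified in the log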
00:10:56.094 [2024-11-19 10:05:09.963381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.352 [2024-11-19 10:05:10.119673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.352 [2024-11-19 10:05:10.192497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.352 [2024-11-19 10:05:10.192749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.352 [2024-11-19 10:05:10.192961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.352 [2024-11-19 10:05:10.193153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.352 [2024-11-19 10:05:10.193333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.352 [2024-11-19 10:05:10.194669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.352 [2024-11-19 10:05:10.194742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.352 [2024-11-19 10:05:10.194875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.352 [2024-11-19 10:05:10.194880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.611 [2024-11-19 10:05:10.253099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 [2024-11-19 10:05:10.371481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 Malloc0 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.611 10:05:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.611 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 [2024-11-19 10:05:10.434036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:56.611 test case1: single bdev can't be used in multiple subsystems 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.612 [2024-11-19 10:05:10.457841] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:56.612 [2024-11-19 10:05:10.457894] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:56.612 [2024-11-19 10:05:10.457909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.612 request: 00:10:56.612 { 00:10:56.612 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:56.612 "namespace": { 00:10:56.612 "bdev_name": "Malloc0", 00:10:56.612 "no_auto_visible": false 00:10:56.612 }, 00:10:56.612 "method": "nvmf_subsystem_add_ns", 00:10:56.612 "req_id": 1 00:10:56.612 } 00:10:56.612 Got JSON-RPC error response 00:10:56.612 response: 00:10:56.612 { 00:10:56.612 "code": -32602, 00:10:56.612 "message": "Invalid parameters" 00:10:56.612 } 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:56.612 Adding namespace failed - expected result. 00:10:56.612 test case2: host connect to nvmf target in multiple paths 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.612 [2024-11-19 10:05:10.474074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.612 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:56.870 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:56.870 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.870 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.870 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.870 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.870 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:59.403 10:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:59.403 10:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:59.403 10:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.403 10:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:59.403 10:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.403 10:05:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:59.403 10:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:59.403 [global] 00:10:59.403 thread=1 00:10:59.403 invalidate=1 00:10:59.403 rw=write 00:10:59.403 time_based=1 00:10:59.403 runtime=1 00:10:59.403 ioengine=libaio 00:10:59.403 direct=1 00:10:59.403 bs=4096 00:10:59.403 iodepth=1 00:10:59.403 norandommap=0 00:10:59.403 numjobs=1 00:10:59.403 00:10:59.403 verify_dump=1 00:10:59.403 verify_backlog=512 00:10:59.403 verify_state_save=0 00:10:59.403 do_verify=1 00:10:59.403 verify=crc32c-intel 00:10:59.403 [job0] 00:10:59.403 filename=/dev/nvme0n1 00:10:59.403 Could not set queue depth (nvme0n1) 00:10:59.403 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.403 fio-3.35 00:10:59.403 Starting 1 thread 00:11:00.337 00:11:00.337 job0: (groupid=0, jobs=1): err= 0: pid=65883: Tue Nov 19 10:05:14 2024 00:11:00.337 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:00.337 slat (usec): min=12, max=170, avg=16.07, stdev= 7.11 00:11:00.337 clat (usec): min=133, max=306, avg=169.43, stdev=16.55 00:11:00.337 lat (usec): min=148, max=357, avg=185.50, stdev=20.79 00:11:00.337 clat percentiles (usec): 00:11:00.337 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:11:00.337 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:11:00.337 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:11:00.337 | 99.00th=[ 223], 99.50th=[ 237], 99.90th=[ 285], 99.95th=[ 306], 00:11:00.337 | 99.99th=[ 306] 00:11:00.337 write: IOPS=3203, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:11:00.337 slat (usec): min=13, max=129, avg=23.53, stdev= 9.61 00:11:00.337 clat (usec): min=76, max=268, avg=107.14, stdev=14.43 00:11:00.337 lat (usec): min=103, max=397, avg=130.68, stdev=21.28 00:11:00.337 clat percentiles (usec): 00:11:00.337 | 1.00th=[ 88], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 97], 00:11:00.337 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 108], 00:11:00.337 | 70.00th=[ 111], 80.00th=[ 116], 90.00th=[ 125], 95.00th=[ 135], 00:11:00.337 | 99.00th=[ 161], 99.50th=[ 176], 99.90th=[ 202], 99.95th=[ 210], 00:11:00.337 | 99.99th=[ 269] 00:11:00.337 bw ( KiB/s): min=12288, max=12288, per=95.89%, avg=12288.00, stdev= 0.00, samples=1 00:11:00.337 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:00.337 lat (usec) : 100=16.09%, 250=83.71%, 500=0.21% 00:11:00.337 cpu : usr=3.00%, sys=9.60%, ctx=6282, majf=0, minf=5 00:11:00.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.337 issued rwts: total=3072,3207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.337 00:11:00.337 Run status group 0 (all jobs): 00:11:00.337 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:00.337 WRITE: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:11:00.337 00:11:00.337 Disk stats (read/write): 00:11:00.337 nvme0n1: ios=2638/3072, merge=0/0, ticks=481/350, in_queue=831, util=91.37% 
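The fio summary above can be cross-checked from the issued I/O counts and the 1001 msec runtime. A small sketch of that arithmetic follows; the shell variable names are illustrative, with the values copied from the "issued rwts" and "Run status" lines of this job:
  reads=3072; writes=3207; bs=4096; runtime_ms=1001
  awk -v n="$reads"  -v b="$bs" -v t="$runtime_ms" 'BEGIN{printf "read  ~%.1f MiB/s\n", n*b/1048576/(t/1000)}'
  awk -v n="$writes" -v b="$bs" -v t="$runtime_ms" 'BEGIN{printf "write ~%.1f MiB/s\n", n*b/1048576/(t/1000)}'
This works out to roughly 12.0 MiB/s read and 12.5 MiB/s write, consistent with the READ and WRITE bandwidths reported by fio for the single 4 KiB, queue-depth-1 job.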
00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.337 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.337 rmmod nvme_tcp 00:11:00.337 rmmod nvme_fabrics 00:11:00.595 rmmod nvme_keyring 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65799 ']' 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65799 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65799 ']' 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65799 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65799 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65799' 00:11:00.595 killing process with pid 65799 00:11:00.595 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 65799 00:11:00.595 
10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65799 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.853 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:01.111 00:11:01.111 real 0m5.484s 00:11:01.111 user 0m15.895s 00:11:01.111 sys 0m2.364s 00:11:01.111 ************************************ 00:11:01.111 END TEST nvmf_nmic 00:11:01.111 ************************************ 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.111 10:05:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.111 ************************************ 00:11:01.111 START TEST nvmf_fio_target 00:11:01.111 ************************************ 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:01.111 * Looking for test storage... 00:11:01.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.111 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.112 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:01.371 10:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.371 --rc genhtml_branch_coverage=1 00:11:01.371 --rc genhtml_function_coverage=1 00:11:01.371 --rc genhtml_legend=1 00:11:01.371 --rc geninfo_all_blocks=1 00:11:01.371 --rc geninfo_unexecuted_blocks=1 00:11:01.371 00:11:01.371 ' 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.371 --rc genhtml_branch_coverage=1 00:11:01.371 --rc genhtml_function_coverage=1 00:11:01.371 --rc genhtml_legend=1 00:11:01.371 --rc geninfo_all_blocks=1 00:11:01.371 --rc geninfo_unexecuted_blocks=1 00:11:01.371 00:11:01.371 ' 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.371 --rc genhtml_branch_coverage=1 00:11:01.371 --rc genhtml_function_coverage=1 00:11:01.371 --rc genhtml_legend=1 00:11:01.371 --rc geninfo_all_blocks=1 00:11:01.371 --rc geninfo_unexecuted_blocks=1 00:11:01.371 00:11:01.371 ' 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.371 --rc genhtml_branch_coverage=1 00:11:01.371 --rc genhtml_function_coverage=1 00:11:01.371 --rc genhtml_legend=1 00:11:01.371 --rc geninfo_all_blocks=1 00:11:01.371 --rc geninfo_unexecuted_blocks=1 00:11:01.371 00:11:01.371 ' 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:01.371 
10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.371 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.372 10:05:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:01.372 Cannot find device "nvmf_init_br" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:01.372 Cannot find device "nvmf_init_br2" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:01.372 Cannot find device "nvmf_tgt_br" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.372 Cannot find device "nvmf_tgt_br2" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:01.372 Cannot find device "nvmf_init_br" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:01.372 Cannot find device "nvmf_init_br2" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:01.372 Cannot find device "nvmf_tgt_br" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:01.372 Cannot find device "nvmf_tgt_br2" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:01.372 Cannot find device "nvmf_br" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:01.372 Cannot find device "nvmf_init_if" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:01.372 Cannot find device "nvmf_init_if2" 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:01.372 
10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:01.372 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:01.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:11:01.632 00:11:01.632 --- 10.0.0.3 ping statistics --- 00:11:01.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.632 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:01.632 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:01.632 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:11:01.632 00:11:01.632 --- 10.0.0.4 ping statistics --- 00:11:01.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.632 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:01.632 00:11:01.632 --- 10.0.0.1 ping statistics --- 00:11:01.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.632 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:01.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:11:01.632 00:11:01.632 --- 10.0.0.2 ping statistics --- 00:11:01.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.632 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.632 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66110 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66110 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66110 ']' 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.633 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 [2024-11-19 10:05:15.518577] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
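The nvmf_veth_init trace above amounts to the following virtual topology, condensed here as a sketch assembled only from the commands visible in the trace (interface names and the 10.0.0.0/24 addresses come from nvmf/common.sh as shown; the loop variable "l" is just shorthand for the four per-interface "master" commands, and the -m comment arguments on the iptables rules are omitted):

  # target-side namespace and four veth pairs; the *_if ends carry traffic, the *_br ends get bridged
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiators 10.0.0.1/.2 in the default namespace, targets 10.0.0.3/.4 in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # (each interface is also brought up with "ip link set <if> up", elided here)
  # bridge the *_br peers together and open TCP/4420 towards the initiator interfaces
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check in both directions, then nvmf_tgt is launched inside the namespace
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The initial "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are the cleanup of a previous topology that does not exist yet, which is why they are followed by "# true".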
00:11:01.633 [2024-11-19 10:05:15.518682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.891 [2024-11-19 10:05:15.668701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.891 [2024-11-19 10:05:15.731814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.891 [2024-11-19 10:05:15.731873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.891 [2024-11-19 10:05:15.731885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.891 [2024-11-19 10:05:15.731894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.891 [2024-11-19 10:05:15.731901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.891 [2024-11-19 10:05:15.733050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.891 [2024-11-19 10:05:15.733098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.891 [2024-11-19 10:05:15.733248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.891 [2024-11-19 10:05:15.733259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.150 [2024-11-19 10:05:15.803977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.150 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.150 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:02.150 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.150 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.150 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.150 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.150 10:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:02.408 [2024-11-19 10:05:16.213448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.408 10:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.976 10:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:02.976 10:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.976 10:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:02.976 10:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.564 10:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:03.564 10:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.822 10:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:03.822 10:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:04.080 10:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.339 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:04.339 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.597 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:04.597 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.855 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:04.855 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:05.114 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.680 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:05.680 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.939 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:05.939 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:06.197 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:06.455 [2024-11-19 10:05:20.118726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:06.455 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:06.713 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:06.972 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:06.972 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:06.972 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.972 10:05:20 
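The RPC-driven target configuration traced by fio.sh above reduces to the sequence sketched below, using only the rpc.py and nvme calls visible in the trace and keeping their order (six identical 64 MiB, 512-byte-block malloc bdevs are created one call at a time; "rpc" is shorthand for the full rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # Malloc0/Malloc1 are exported directly; Malloc2/3 back raid0, Malloc4/5/6 back concat0
  $rpc bdev_malloc_create 64 512        # repeated six times -> Malloc0 .. Malloc6
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # one subsystem with four namespaces, listening on the in-namespace target address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # initiator side: connect over TCP and wait for the four namespaces (nvme0n1..n4) to appear
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a \
               --hostid=6147973c-080a-4377-b1e7-85172bdc559a \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

The waitforserial loop that follows polls "lsblk -l -o NAME,SERIAL" until all four devices with serial SPDKISFASTANDAWESOME are visible, after which the fio-wrapper runs are issued against /dev/nvme0n1 through /dev/nvme0n4.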
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.972 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:06.972 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:06.972 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:08.917 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:08.917 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:08.917 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.175 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:09.175 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.175 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:09.175 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:09.175 [global] 00:11:09.175 thread=1 00:11:09.175 invalidate=1 00:11:09.175 rw=write 00:11:09.175 time_based=1 00:11:09.175 runtime=1 00:11:09.175 ioengine=libaio 00:11:09.175 direct=1 00:11:09.175 bs=4096 00:11:09.175 iodepth=1 00:11:09.175 norandommap=0 00:11:09.175 numjobs=1 00:11:09.175 00:11:09.175 verify_dump=1 00:11:09.175 verify_backlog=512 00:11:09.175 verify_state_save=0 00:11:09.175 do_verify=1 00:11:09.175 verify=crc32c-intel 00:11:09.175 [job0] 00:11:09.175 filename=/dev/nvme0n1 00:11:09.175 [job1] 00:11:09.175 filename=/dev/nvme0n2 00:11:09.175 [job2] 00:11:09.175 filename=/dev/nvme0n3 00:11:09.175 [job3] 00:11:09.175 filename=/dev/nvme0n4 00:11:09.175 Could not set queue depth (nvme0n1) 00:11:09.175 Could not set queue depth (nvme0n2) 00:11:09.176 Could not set queue depth (nvme0n3) 00:11:09.176 Could not set queue depth (nvme0n4) 00:11:09.176 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.176 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.176 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.176 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.176 fio-3.35 00:11:09.176 Starting 4 threads 00:11:10.552 00:11:10.552 job0: (groupid=0, jobs=1): err= 0: pid=66298: Tue Nov 19 10:05:24 2024 00:11:10.552 read: IOPS=1969, BW=7876KiB/s (8065kB/s)(7884KiB/1001msec) 00:11:10.552 slat (nsec): min=8378, max=54620, avg=12258.49, stdev=3932.91 00:11:10.552 clat (usec): min=154, max=494, avg=267.72, stdev=45.65 00:11:10.552 lat (usec): min=168, max=509, avg=279.98, stdev=47.37 00:11:10.552 clat percentiles (usec): 00:11:10.552 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:11:10.552 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:11:10.552 | 70.00th=[ 269], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 355], 00:11:10.552 | 99.00th=[ 388], 99.50th=[ 424], 99.90th=[ 494], 99.95th=[ 494], 00:11:10.552 | 99.99th=[ 494] 
00:11:10.552 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:10.552 slat (usec): min=11, max=108, avg=20.60, stdev= 7.89 00:11:10.552 clat (usec): min=102, max=8002, avg=195.18, stdev=262.18 00:11:10.552 lat (usec): min=138, max=8042, avg=215.78, stdev=264.00 00:11:10.552 clat percentiles (usec): 00:11:10.552 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 151], 00:11:10.552 | 30.00th=[ 163], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:11:10.552 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 245], 95.00th=[ 273], 00:11:10.552 | 99.00th=[ 306], 99.50th=[ 404], 99.90th=[ 3818], 99.95th=[ 7898], 00:11:10.552 | 99.99th=[ 8029] 00:11:10.552 bw ( KiB/s): min= 8192, max= 8192, per=24.60%, avg=8192.00, stdev= 0.00, samples=1 00:11:10.552 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:10.552 lat (usec) : 250=71.31%, 500=28.54%, 750=0.05% 00:11:10.552 lat (msec) : 4=0.05%, 10=0.05% 00:11:10.552 cpu : usr=1.40%, sys=5.70%, ctx=4022, majf=0, minf=17 00:11:10.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.552 issued rwts: total=1971,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.552 job1: (groupid=0, jobs=1): err= 0: pid=66299: Tue Nov 19 10:05:24 2024 00:11:10.552 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:10.552 slat (nsec): min=8615, max=28883, avg=12546.17, stdev=1732.24 00:11:10.552 clat (usec): min=178, max=865, avg=245.44, stdev=22.71 00:11:10.552 lat (usec): min=194, max=878, avg=257.98, stdev=22.65 00:11:10.552 clat percentiles (usec): 00:11:10.552 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 233], 00:11:10.552 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:11:10.552 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:11:10.552 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 461], 99.95th=[ 578], 00:11:10.552 | 99.99th=[ 865] 00:11:10.552 write: IOPS=2058, BW=8236KiB/s (8433kB/s)(8244KiB/1001msec); 0 zone resets 00:11:10.552 slat (usec): min=10, max=112, avg=19.20, stdev= 8.83 00:11:10.552 clat (usec): min=123, max=517, avg=206.45, stdev=34.87 00:11:10.552 lat (usec): min=156, max=566, avg=225.65, stdev=40.43 00:11:10.552 clat percentiles (usec): 00:11:10.552 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:11:10.552 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:11:10.552 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 258], 95.00th=[ 281], 00:11:10.552 | 99.00th=[ 363], 99.50th=[ 392], 99.90th=[ 441], 99.95th=[ 465], 00:11:10.552 | 99.99th=[ 519] 00:11:10.553 bw ( KiB/s): min= 8192, max= 8192, per=24.60%, avg=8192.00, stdev= 0.00, samples=1 00:11:10.553 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:10.553 lat (usec) : 250=78.63%, 500=21.29%, 750=0.05%, 1000=0.02% 00:11:10.553 cpu : usr=1.90%, sys=5.30%, ctx=4110, majf=0, minf=7 00:11:10.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.553 issued rwts: total=2048,2061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.553 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:11:10.553 job2: (groupid=0, jobs=1): err= 0: pid=66300: Tue Nov 19 10:05:24 2024 00:11:10.553 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:10.553 slat (nsec): min=8344, max=70376, avg=12004.49, stdev=3675.53 00:11:10.553 clat (usec): min=177, max=877, avg=246.22, stdev=21.03 00:11:10.553 lat (usec): min=197, max=887, avg=258.22, stdev=21.86 00:11:10.553 clat percentiles (usec): 00:11:10.553 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:11:10.553 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:11:10.553 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:11:10.553 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 449], 99.95th=[ 453], 00:11:10.553 | 99.99th=[ 881] 00:11:10.553 write: IOPS=2058, BW=8236KiB/s (8433kB/s)(8244KiB/1001msec); 0 zone resets 00:11:10.553 slat (nsec): min=17074, max=91771, avg=23537.37, stdev=8142.61 00:11:10.553 clat (usec): min=105, max=532, avg=201.91, stdev=34.88 00:11:10.553 lat (usec): min=123, max=566, avg=225.44, stdev=40.12 00:11:10.553 clat percentiles (usec): 00:11:10.553 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:11:10.553 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:11:10.553 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 255], 95.00th=[ 277], 00:11:10.553 | 99.00th=[ 355], 99.50th=[ 383], 99.90th=[ 449], 99.95th=[ 453], 00:11:10.553 | 99.99th=[ 529] 00:11:10.553 bw ( KiB/s): min= 8192, max= 8192, per=24.60%, avg=8192.00, stdev= 0.00, samples=1 00:11:10.553 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:10.553 lat (usec) : 250=78.34%, 500=21.61%, 750=0.02%, 1000=0.02% 00:11:10.553 cpu : usr=1.70%, sys=6.20%, ctx=4111, majf=0, minf=7 00:11:10.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.553 issued rwts: total=2048,2061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.553 job3: (groupid=0, jobs=1): err= 0: pid=66301: Tue Nov 19 10:05:24 2024 00:11:10.553 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:11:10.553 slat (nsec): min=8296, max=53890, avg=13897.87, stdev=3609.25 00:11:10.553 clat (usec): min=196, max=533, avg=269.08, stdev=50.02 00:11:10.553 lat (usec): min=220, max=550, avg=282.98, stdev=51.11 00:11:10.553 clat percentiles (usec): 00:11:10.553 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 231], 00:11:10.553 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:11:10.553 | 70.00th=[ 273], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 359], 00:11:10.553 | 99.00th=[ 429], 99.50th=[ 461], 99.90th=[ 510], 99.95th=[ 529], 00:11:10.553 | 99.99th=[ 537] 00:11:10.553 write: IOPS=2164, BW=8656KiB/s (8864kB/s)(8656KiB/1000msec); 0 zone resets 00:11:10.553 slat (nsec): min=11216, max=94560, avg=22464.19, stdev=9325.98 00:11:10.553 clat (usec): min=107, max=2949, avg=168.38, stdev=65.64 00:11:10.553 lat (usec): min=140, max=3044, avg=190.84, stdev=66.05 00:11:10.553 clat percentiles (usec): 00:11:10.553 | 1.00th=[ 117], 5.00th=[ 125], 10.00th=[ 133], 20.00th=[ 143], 00:11:10.553 | 30.00th=[ 149], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 180], 00:11:10.553 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:11:10.553 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 
260], 99.95th=[ 269], 00:11:10.553 | 99.99th=[ 2966] 00:11:10.553 bw ( KiB/s): min= 8520, max= 8520, per=25.58%, avg=8520.00, stdev= 0.00, samples=1 00:11:10.553 iops : min= 2130, max= 2130, avg=2130.00, stdev= 0.00, samples=1 00:11:10.553 lat (usec) : 250=77.23%, 500=22.67%, 750=0.07% 00:11:10.553 lat (msec) : 4=0.02% 00:11:10.553 cpu : usr=2.20%, sys=6.10%, ctx=4212, majf=0, minf=5 00:11:10.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.553 issued rwts: total=2048,2164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.553 00:11:10.553 Run status group 0 (all jobs): 00:11:10.553 READ: bw=31.7MiB/s (33.2MB/s), 7876KiB/s-8192KiB/s (8065kB/s-8389kB/s), io=31.7MiB (33.2MB), run=1000-1001msec 00:11:10.553 WRITE: bw=32.5MiB/s (34.1MB/s), 8184KiB/s-8656KiB/s (8380kB/s-8864kB/s), io=32.6MiB (34.1MB), run=1000-1001msec 00:11:10.553 00:11:10.553 Disk stats (read/write): 00:11:10.553 nvme0n1: ios=1586/1924, merge=0/0, ticks=451/361, in_queue=812, util=87.17% 00:11:10.553 nvme0n2: ios=1574/2021, merge=0/0, ticks=388/363, in_queue=751, util=88.65% 00:11:10.553 nvme0n3: ios=1536/2019, merge=0/0, ticks=363/419, in_queue=782, util=89.14% 00:11:10.553 nvme0n4: ios=1609/2048, merge=0/0, ticks=453/333, in_queue=786, util=89.70% 00:11:10.553 10:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:10.553 [global] 00:11:10.553 thread=1 00:11:10.553 invalidate=1 00:11:10.553 rw=randwrite 00:11:10.553 time_based=1 00:11:10.553 runtime=1 00:11:10.553 ioengine=libaio 00:11:10.553 direct=1 00:11:10.553 bs=4096 00:11:10.553 iodepth=1 00:11:10.553 norandommap=0 00:11:10.553 numjobs=1 00:11:10.553 00:11:10.553 verify_dump=1 00:11:10.553 verify_backlog=512 00:11:10.553 verify_state_save=0 00:11:10.553 do_verify=1 00:11:10.553 verify=crc32c-intel 00:11:10.553 [job0] 00:11:10.553 filename=/dev/nvme0n1 00:11:10.553 [job1] 00:11:10.553 filename=/dev/nvme0n2 00:11:10.553 [job2] 00:11:10.553 filename=/dev/nvme0n3 00:11:10.553 [job3] 00:11:10.553 filename=/dev/nvme0n4 00:11:10.553 Could not set queue depth (nvme0n1) 00:11:10.553 Could not set queue depth (nvme0n2) 00:11:10.553 Could not set queue depth (nvme0n3) 00:11:10.553 Could not set queue depth (nvme0n4) 00:11:10.553 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.553 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.553 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.553 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.553 fio-3.35 00:11:10.553 Starting 4 threads 00:11:11.970 00:11:11.970 job0: (groupid=0, jobs=1): err= 0: pid=66359: Tue Nov 19 10:05:25 2024 00:11:11.970 read: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec) 00:11:11.970 slat (nsec): min=10306, max=27653, avg=12038.34, stdev=1505.66 00:11:11.970 clat (usec): min=129, max=2177, avg=158.18, stdev=53.19 00:11:11.970 lat (usec): min=141, max=2195, avg=170.22, stdev=53.47 00:11:11.970 clat percentiles (usec): 00:11:11.970 | 1.00th=[ 137], 5.00th=[ 
141], 10.00th=[ 145], 20.00th=[ 147], 00:11:11.970 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:11:11.970 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:11:11.970 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 963], 99.95th=[ 1582], 00:11:11.970 | 99.99th=[ 2180] 00:11:11.970 write: IOPS=3477, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1000msec); 0 zone resets 00:11:11.970 slat (nsec): min=12918, max=91486, avg=18123.76, stdev=2885.48 00:11:11.970 clat (usec): min=89, max=1755, avg=116.22, stdev=30.25 00:11:11.970 lat (usec): min=106, max=1773, avg=134.34, stdev=30.48 00:11:11.970 clat percentiles (usec): 00:11:11.970 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 106], 00:11:11.970 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 118], 00:11:11.970 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 137], 00:11:11.970 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 176], 99.95th=[ 269], 00:11:11.970 | 99.99th=[ 1762] 00:11:11.970 bw ( KiB/s): min=13568, max=13568, per=31.90%, avg=13568.00, stdev= 0.00, samples=1 00:11:11.970 iops : min= 3392, max= 3392, avg=3392.00, stdev= 0.00, samples=1 00:11:11.970 lat (usec) : 100=3.66%, 250=96.08%, 500=0.15%, 750=0.02%, 1000=0.03% 00:11:11.970 lat (msec) : 2=0.05%, 4=0.02% 00:11:11.970 cpu : usr=2.10%, sys=8.20%, ctx=6550, majf=0, minf=11 00:11:11.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.970 issued rwts: total=3072,3477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.970 job1: (groupid=0, jobs=1): err= 0: pid=66360: Tue Nov 19 10:05:25 2024 00:11:11.970 read: IOPS=1712, BW=6849KiB/s (7014kB/s)(6856KiB/1001msec) 00:11:11.970 slat (nsec): min=12172, max=40627, avg=14732.13, stdev=2413.87 00:11:11.970 clat (usec): min=165, max=2558, avg=287.09, stdev=68.85 00:11:11.970 lat (usec): min=181, max=2590, avg=301.82, stdev=69.48 00:11:11.970 clat percentiles (usec): 00:11:11.970 | 1.00th=[ 235], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:11:11.970 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:11:11.970 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 334], 00:11:11.970 | 99.00th=[ 424], 99.50th=[ 515], 99.90th=[ 1205], 99.95th=[ 2573], 00:11:11.970 | 99.99th=[ 2573] 00:11:11.970 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:11.970 slat (usec): min=16, max=661, avg=22.72, stdev=14.97 00:11:11.970 clat (usec): min=97, max=2054, avg=209.70, stdev=67.11 00:11:11.970 lat (usec): min=115, max=2075, avg=232.42, stdev=69.43 00:11:11.970 clat percentiles (usec): 00:11:11.970 | 1.00th=[ 110], 5.00th=[ 128], 10.00th=[ 174], 20.00th=[ 196], 00:11:11.970 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:11:11.970 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 249], 00:11:11.970 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 668], 99.95th=[ 1614], 00:11:11.970 | 99.99th=[ 2057] 00:11:11.970 bw ( KiB/s): min= 8192, max= 8192, per=19.26%, avg=8192.00, stdev= 0.00, samples=1 00:11:11.970 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:11.970 lat (usec) : 100=0.08%, 250=53.14%, 500=46.33%, 750=0.35% 00:11:11.970 lat (msec) : 2=0.05%, 4=0.05% 00:11:11.970 cpu : usr=1.80%, sys=5.30%, ctx=3765, majf=0, minf=21 
00:11:11.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.970 issued rwts: total=1714,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.970 job2: (groupid=0, jobs=1): err= 0: pid=66361: Tue Nov 19 10:05:25 2024 00:11:11.970 read: IOPS=1687, BW=6749KiB/s (6911kB/s)(6756KiB/1001msec) 00:11:11.970 slat (nsec): min=11665, max=48991, avg=15699.77, stdev=4033.44 00:11:11.970 clat (usec): min=166, max=685, avg=286.06, stdev=43.78 00:11:11.970 lat (usec): min=181, max=714, avg=301.76, stdev=45.33 00:11:11.970 clat percentiles (usec): 00:11:11.970 | 1.00th=[ 223], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:11:11.970 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:11:11.970 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 326], 00:11:11.970 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 619], 99.95th=[ 685], 00:11:11.970 | 99.99th=[ 685] 00:11:11.970 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:11.970 slat (nsec): min=16769, max=79141, avg=22249.01, stdev=5969.42 00:11:11.970 clat (usec): min=111, max=6478, avg=213.89, stdev=182.58 00:11:11.970 lat (usec): min=132, max=6497, avg=236.14, stdev=182.85 00:11:11.970 clat percentiles (usec): 00:11:11.970 | 1.00th=[ 119], 5.00th=[ 143], 10.00th=[ 184], 20.00th=[ 196], 00:11:11.970 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:11:11.970 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 231], 95.00th=[ 241], 00:11:11.970 | 99.00th=[ 293], 99.50th=[ 383], 99.90th=[ 3294], 99.95th=[ 3785], 00:11:11.970 | 99.99th=[ 6456] 00:11:11.970 bw ( KiB/s): min= 8192, max= 8192, per=19.26%, avg=8192.00, stdev= 0.00, samples=1 00:11:11.970 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:11.970 lat (usec) : 250=55.31%, 500=44.05%, 750=0.54% 00:11:11.970 lat (msec) : 4=0.08%, 10=0.03% 00:11:11.970 cpu : usr=1.50%, sys=5.80%, ctx=3737, majf=0, minf=11 00:11:11.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.970 issued rwts: total=1689,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.970 job3: (groupid=0, jobs=1): err= 0: pid=66362: Tue Nov 19 10:05:25 2024 00:11:11.970 read: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:11:11.970 slat (nsec): min=11471, max=29428, avg=13023.14, stdev=1650.31 00:11:11.970 clat (usec): min=147, max=621, avg=176.75, stdev=17.99 00:11:11.970 lat (usec): min=160, max=635, avg=189.78, stdev=18.17 00:11:11.970 clat percentiles (usec): 00:11:11.970 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:11:11.970 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:11:11.971 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 200], 00:11:11.971 | 99.00th=[ 239], 99.50th=[ 260], 99.90th=[ 293], 99.95th=[ 478], 00:11:11.971 | 99.99th=[ 619] 00:11:11.971 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:11.971 slat (nsec): min=13642, max=93163, avg=18405.42, stdev=3129.33 00:11:11.971 clat (usec): min=104, max=379, 
avg=133.26, stdev=14.80 00:11:11.971 lat (usec): min=121, max=396, avg=151.67, stdev=15.30 00:11:11.971 clat percentiles (usec): 00:11:11.971 | 1.00th=[ 110], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 122], 00:11:11.971 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:11:11.971 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:11:11.971 | 99.00th=[ 174], 99.50th=[ 184], 99.90th=[ 217], 99.95th=[ 258], 00:11:11.971 | 99.99th=[ 379] 00:11:11.971 bw ( KiB/s): min=12288, max=12288, per=28.89%, avg=12288.00, stdev= 0.00, samples=1 00:11:11.971 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:11.971 lat (usec) : 250=99.64%, 500=0.34%, 750=0.02% 00:11:11.971 cpu : usr=2.30%, sys=7.00%, ctx=5844, majf=0, minf=5 00:11:11.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.971 issued rwts: total=2771,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.971 00:11:11.971 Run status group 0 (all jobs): 00:11:11.971 READ: bw=36.1MiB/s (37.8MB/s), 6749KiB/s-12.0MiB/s (6911kB/s-12.6MB/s), io=36.1MiB (37.9MB), run=1000-1001msec 00:11:11.971 WRITE: bw=41.5MiB/s (43.6MB/s), 8184KiB/s-13.6MiB/s (8380kB/s-14.2MB/s), io=41.6MiB (43.6MB), run=1000-1001msec 00:11:11.971 00:11:11.971 Disk stats (read/write): 00:11:11.971 nvme0n1: ios=2610/3047, merge=0/0, ticks=423/384, in_queue=807, util=87.17% 00:11:11.971 nvme0n2: ios=1568/1660, merge=0/0, ticks=471/359, in_queue=830, util=87.91% 00:11:11.971 nvme0n3: ios=1536/1659, merge=0/0, ticks=451/358, in_queue=809, util=88.96% 00:11:11.971 nvme0n4: ios=2420/2560, merge=0/0, ticks=428/363, in_queue=791, util=89.62% 00:11:11.971 10:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:11.971 [global] 00:11:11.971 thread=1 00:11:11.971 invalidate=1 00:11:11.971 rw=write 00:11:11.971 time_based=1 00:11:11.971 runtime=1 00:11:11.971 ioengine=libaio 00:11:11.971 direct=1 00:11:11.971 bs=4096 00:11:11.971 iodepth=128 00:11:11.971 norandommap=0 00:11:11.971 numjobs=1 00:11:11.971 00:11:11.971 verify_dump=1 00:11:11.971 verify_backlog=512 00:11:11.971 verify_state_save=0 00:11:11.971 do_verify=1 00:11:11.971 verify=crc32c-intel 00:11:11.971 [job0] 00:11:11.971 filename=/dev/nvme0n1 00:11:11.971 [job1] 00:11:11.971 filename=/dev/nvme0n2 00:11:11.971 [job2] 00:11:11.971 filename=/dev/nvme0n3 00:11:11.971 [job3] 00:11:11.971 filename=/dev/nvme0n4 00:11:11.971 Could not set queue depth (nvme0n1) 00:11:11.971 Could not set queue depth (nvme0n2) 00:11:11.971 Could not set queue depth (nvme0n3) 00:11:11.971 Could not set queue depth (nvme0n4) 00:11:11.971 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.971 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.971 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.971 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.971 fio-3.35 00:11:11.971 Starting 4 threads 00:11:13.351 00:11:13.351 job0: (groupid=0, jobs=1): err= 0: pid=66420: Tue Nov 19 
10:05:26 2024 00:11:13.351 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:11:13.351 slat (usec): min=4, max=10857, avg=210.45, stdev=941.79 00:11:13.351 clat (usec): min=15677, max=37564, avg=26901.49, stdev=4104.10 00:11:13.351 lat (usec): min=18073, max=37576, avg=27111.95, stdev=4089.31 00:11:13.351 clat percentiles (usec): 00:11:13.351 | 1.00th=[18744], 5.00th=[20055], 10.00th=[21103], 20.00th=[23987], 00:11:13.351 | 30.00th=[24511], 40.00th=[25297], 50.00th=[26084], 60.00th=[27919], 00:11:13.351 | 70.00th=[29230], 80.00th=[31065], 90.00th=[32900], 95.00th=[33817], 00:11:13.351 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:11:13.351 | 99.99th=[37487] 00:11:13.351 write: IOPS=2511, BW=9.81MiB/s (10.3MB/s)(9.84MiB/1003msec); 0 zone resets 00:11:13.351 slat (usec): min=8, max=6267, avg=217.72, stdev=767.42 00:11:13.352 clat (usec): min=2259, max=52990, avg=28138.34, stdev=9479.10 00:11:13.352 lat (usec): min=2282, max=53017, avg=28356.06, stdev=9517.33 00:11:13.352 clat percentiles (usec): 00:11:13.352 | 1.00th=[ 7177], 5.00th=[16188], 10.00th=[17957], 20.00th=[20317], 00:11:13.352 | 30.00th=[21890], 40.00th=[23987], 50.00th=[26346], 60.00th=[29754], 00:11:13.352 | 70.00th=[32113], 80.00th=[35914], 90.00th=[42206], 95.00th=[47449], 00:11:13.352 | 99.00th=[52167], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:11:13.352 | 99.99th=[53216] 00:11:13.352 bw ( KiB/s): min= 8520, max=10573, per=15.76%, avg=9546.50, stdev=1451.69, samples=2 00:11:13.352 iops : min= 2130, max= 2643, avg=2386.50, stdev=362.75, samples=2 00:11:13.352 lat (msec) : 4=0.35%, 10=0.53%, 20=9.39%, 50=88.64%, 100=1.09% 00:11:13.352 cpu : usr=2.40%, sys=7.39%, ctx=502, majf=0, minf=11 00:11:13.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:13.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.352 issued rwts: total=2048,2519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.352 job1: (groupid=0, jobs=1): err= 0: pid=66423: Tue Nov 19 10:05:26 2024 00:11:13.352 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:11:13.352 slat (usec): min=5, max=8846, avg=101.16, stdev=563.70 00:11:13.352 clat (usec): min=6455, max=31510, avg=13717.23, stdev=4637.61 00:11:13.352 lat (usec): min=6468, max=31623, avg=13818.39, stdev=4679.32 00:11:13.352 clat percentiles (usec): 00:11:13.352 | 1.00th=[ 7373], 5.00th=[10421], 10.00th=[10683], 20.00th=[10945], 00:11:13.352 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:11:13.352 | 70.00th=[12256], 80.00th=[17957], 90.00th=[21365], 95.00th=[24249], 00:11:13.352 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28181], 99.95th=[28967], 00:11:13.352 | 99.99th=[31589] 00:11:13.352 write: IOPS=5032, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1007msec); 0 zone resets 00:11:13.352 slat (usec): min=5, max=8176, avg=98.30, stdev=488.49 00:11:13.352 clat (usec): min=5502, max=28424, avg=12715.63, stdev=4312.11 00:11:13.352 lat (usec): min=6835, max=28449, avg=12813.93, stdev=4321.36 00:11:13.352 clat percentiles (usec): 00:11:13.352 | 1.00th=[ 7701], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:11:13.352 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:11:13.352 | 70.00th=[11600], 80.00th=[17433], 90.00th=[20317], 95.00th=[21365], 00:11:13.352 | 99.00th=[25297], 99.50th=[27132], 
99.90th=[27919], 99.95th=[28181], 00:11:13.352 | 99.99th=[28443] 00:11:13.352 bw ( KiB/s): min=14952, max=24576, per=32.62%, avg=19764.00, stdev=6805.20, samples=2 00:11:13.352 iops : min= 3738, max= 6144, avg=4941.00, stdev=1701.30, samples=2 00:11:13.352 lat (msec) : 10=12.97%, 20=72.81%, 50=14.22% 00:11:13.352 cpu : usr=3.98%, sys=12.33%, ctx=595, majf=0, minf=8 00:11:13.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:13.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.352 issued rwts: total=4608,5068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.352 job2: (groupid=0, jobs=1): err= 0: pid=66425: Tue Nov 19 10:05:26 2024 00:11:13.352 read: IOPS=4435, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1003msec) 00:11:13.352 slat (usec): min=6, max=5747, avg=112.96, stdev=558.64 00:11:13.352 clat (usec): min=1490, max=24473, avg=14716.88, stdev=2897.70 00:11:13.352 lat (usec): min=4724, max=24491, avg=14829.84, stdev=2866.70 00:11:13.352 clat percentiles (usec): 00:11:13.352 | 1.00th=[ 9503], 5.00th=[12649], 10.00th=[12780], 20.00th=[13042], 00:11:13.352 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:11:13.352 | 70.00th=[15926], 80.00th=[16188], 90.00th=[17957], 95.00th=[20579], 00:11:13.352 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:11:13.352 | 99.99th=[24511] 00:11:13.352 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:11:13.352 slat (usec): min=12, max=4256, avg=99.77, stdev=438.04 00:11:13.352 clat (usec): min=9699, max=21319, avg=13257.33, stdev=1509.56 00:11:13.352 lat (usec): min=10795, max=21340, avg=13357.09, stdev=1446.44 00:11:13.352 clat percentiles (usec): 00:11:13.352 | 1.00th=[10290], 5.00th=[12125], 10.00th=[12387], 20.00th=[12518], 00:11:13.352 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:11:13.352 | 70.00th=[13173], 80.00th=[13566], 90.00th=[15401], 95.00th=[15795], 00:11:13.352 | 99.00th=[20055], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:11:13.352 | 99.99th=[21365] 00:11:13.352 bw ( KiB/s): min=16384, max=20480, per=30.42%, avg=18432.00, stdev=2896.31, samples=2 00:11:13.352 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:11:13.352 lat (msec) : 2=0.01%, 10=0.83%, 20=94.90%, 50=4.26% 00:11:13.352 cpu : usr=4.49%, sys=12.77%, ctx=289, majf=0, minf=7 00:11:13.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:13.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.352 issued rwts: total=4449,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.352 job3: (groupid=0, jobs=1): err= 0: pid=66426: Tue Nov 19 10:05:26 2024 00:11:13.352 read: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1008msec) 00:11:13.352 slat (usec): min=3, max=10579, avg=188.50, stdev=792.63 00:11:13.352 clat (usec): min=6627, max=35067, avg=24018.84, stdev=4022.88 00:11:13.352 lat (usec): min=8836, max=35083, avg=24207.34, stdev=4039.81 00:11:13.352 clat percentiles (usec): 00:11:13.352 | 1.00th=[11076], 5.00th=[17695], 10.00th=[19268], 20.00th=[20841], 00:11:13.352 | 30.00th=[22152], 40.00th=[23200], 50.00th=[24249], 60.00th=[25035], 
00:11:13.352 | 70.00th=[26084], 80.00th=[27395], 90.00th=[28443], 95.00th=[30802], 00:11:13.352 | 99.00th=[32375], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:11:13.352 | 99.99th=[34866] 00:11:13.352 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:11:13.352 slat (usec): min=5, max=8850, avg=139.32, stdev=655.93 00:11:13.352 clat (usec): min=10286, max=28876, avg=18559.44, stdev=3527.75 00:11:13.352 lat (usec): min=10485, max=28906, avg=18698.76, stdev=3561.90 00:11:13.352 clat percentiles (usec): 00:11:13.352 | 1.00th=[11338], 5.00th=[12911], 10.00th=[14615], 20.00th=[15926], 00:11:13.352 | 30.00th=[16712], 40.00th=[17433], 50.00th=[17957], 60.00th=[18744], 00:11:13.352 | 70.00th=[19792], 80.00th=[21627], 90.00th=[23462], 95.00th=[24511], 00:11:13.352 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28443], 99.95th=[28443], 00:11:13.352 | 99.99th=[28967] 00:11:13.352 bw ( KiB/s): min=12263, max=12288, per=20.26%, avg=12275.50, stdev=17.68, samples=2 00:11:13.352 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:11:13.352 lat (msec) : 10=0.35%, 20=42.70%, 50=56.95% 00:11:13.352 cpu : usr=3.08%, sys=8.64%, ctx=706, majf=0, minf=11 00:11:13.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:13.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.352 issued rwts: total=2912,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.352 00:11:13.352 Run status group 0 (all jobs): 00:11:13.352 READ: bw=54.3MiB/s (57.0MB/s), 8167KiB/s-17.9MiB/s (8364kB/s-18.7MB/s), io=54.8MiB (57.4MB), run=1003-1008msec 00:11:13.352 WRITE: bw=59.2MiB/s (62.0MB/s), 9.81MiB/s-19.7MiB/s (10.3MB/s-20.6MB/s), io=59.6MiB (62.5MB), run=1003-1008msec 00:11:13.352 00:11:13.352 Disk stats (read/write): 00:11:13.352 nvme0n1: ios=2097/2048, merge=0/0, ticks=15851/15503, in_queue=31354, util=89.28% 00:11:13.352 nvme0n2: ios=4269/4608, merge=0/0, ticks=45200/42999, in_queue=88199, util=89.30% 00:11:13.352 nvme0n3: ios=3840/4096, merge=0/0, ticks=13143/11535, in_queue=24678, util=89.36% 00:11:13.352 nvme0n4: ios=2494/2560, merge=0/0, ticks=23178/14843, in_queue=38021, util=88.58% 00:11:13.352 10:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:13.352 [global] 00:11:13.352 thread=1 00:11:13.352 invalidate=1 00:11:13.352 rw=randwrite 00:11:13.352 time_based=1 00:11:13.352 runtime=1 00:11:13.352 ioengine=libaio 00:11:13.352 direct=1 00:11:13.352 bs=4096 00:11:13.352 iodepth=128 00:11:13.352 norandommap=0 00:11:13.352 numjobs=1 00:11:13.352 00:11:13.352 verify_dump=1 00:11:13.352 verify_backlog=512 00:11:13.352 verify_state_save=0 00:11:13.352 do_verify=1 00:11:13.352 verify=crc32c-intel 00:11:13.352 [job0] 00:11:13.352 filename=/dev/nvme0n1 00:11:13.352 [job1] 00:11:13.352 filename=/dev/nvme0n2 00:11:13.352 [job2] 00:11:13.352 filename=/dev/nvme0n3 00:11:13.352 [job3] 00:11:13.352 filename=/dev/nvme0n4 00:11:13.352 Could not set queue depth (nvme0n1) 00:11:13.352 Could not set queue depth (nvme0n2) 00:11:13.352 Could not set queue depth (nvme0n3) 00:11:13.352 Could not set queue depth (nvme0n4) 00:11:13.352 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.352 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.352 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.352 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.352 fio-3.35 00:11:13.352 Starting 4 threads 00:11:14.752 00:11:14.752 job0: (groupid=0, jobs=1): err= 0: pid=66480: Tue Nov 19 10:05:28 2024 00:11:14.752 read: IOPS=5672, BW=22.2MiB/s (23.2MB/s)(22.2MiB/1004msec) 00:11:14.752 slat (usec): min=5, max=5717, avg=80.07, stdev=492.04 00:11:14.752 clat (usec): min=1209, max=18303, avg=11238.99, stdev=1360.98 00:11:14.752 lat (usec): min=4772, max=21930, avg=11319.06, stdev=1383.97 00:11:14.752 clat percentiles (usec): 00:11:14.752 | 1.00th=[ 5735], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10814], 00:11:14.752 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11469], 00:11:14.752 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[12125], 00:11:14.752 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:11:14.752 | 99.99th=[18220] 00:11:14.752 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:11:14.752 slat (usec): min=10, max=6588, avg=81.56, stdev=471.63 00:11:14.752 clat (usec): min=5168, max=14168, avg=10284.31, stdev=856.99 00:11:14.752 lat (usec): min=5899, max=14362, avg=10365.87, stdev=743.73 00:11:14.752 clat percentiles (usec): 00:11:14.752 | 1.00th=[ 6783], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:11:14.752 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:11:14.752 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11207], 00:11:14.752 | 99.00th=[13173], 99.50th=[13304], 99.90th=[14091], 99.95th=[14091], 00:11:14.752 | 99.99th=[14222] 00:11:14.752 bw ( KiB/s): min=24056, max=24576, per=36.31%, avg=24316.00, stdev=367.70, samples=2 00:11:14.752 iops : min= 6014, max= 6144, avg=6079.00, stdev=91.92, samples=2 00:11:14.752 lat (msec) : 2=0.01%, 10=18.35%, 20=81.64% 00:11:14.752 cpu : usr=4.89%, sys=14.76%, ctx=309, majf=0, minf=11 00:11:14.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:14.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.753 issued rwts: total=5695,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.753 job1: (groupid=0, jobs=1): err= 0: pid=66481: Tue Nov 19 10:05:28 2024 00:11:14.753 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:11:14.753 slat (usec): min=7, max=14750, avg=172.07, stdev=1117.67 00:11:14.753 clat (usec): min=9571, max=43940, avg=24688.09, stdev=3477.25 00:11:14.753 lat (usec): min=9587, max=49317, avg=24860.16, stdev=3429.35 00:11:14.753 clat percentiles (usec): 00:11:14.753 | 1.00th=[15270], 5.00th=[18220], 10.00th=[22152], 20.00th=[23987], 00:11:14.753 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:11:14.753 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26608], 95.00th=[27919], 00:11:14.753 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:11:14.753 | 99.99th=[43779] 00:11:14.753 write: IOPS=2778, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1013msec); 0 zone resets 00:11:14.753 slat (usec): min=6, max=26307, avg=193.45, stdev=1295.41 00:11:14.753 clat (usec): 
min=3086, max=40122, avg=23150.53, stdev=3556.03 00:11:14.753 lat (usec): min=13084, max=40148, avg=23343.98, stdev=3386.44 00:11:14.753 clat percentiles (usec): 00:11:14.753 | 1.00th=[13435], 5.00th=[19792], 10.00th=[20579], 20.00th=[21890], 00:11:14.753 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:11:14.753 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24511], 95.00th=[28967], 00:11:14.753 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:11:14.753 | 99.99th=[40109] 00:11:14.753 bw ( KiB/s): min= 9216, max=12304, per=16.07%, avg=10760.00, stdev=2183.55, samples=2 00:11:14.753 iops : min= 2304, max= 3076, avg=2690.00, stdev=545.89, samples=2 00:11:14.753 lat (msec) : 4=0.02%, 10=0.09%, 20=7.29%, 50=92.60% 00:11:14.753 cpu : usr=3.06%, sys=7.61%, ctx=115, majf=0, minf=15 00:11:14.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:14.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.753 issued rwts: total=2560,2815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.753 job2: (groupid=0, jobs=1): err= 0: pid=66482: Tue Nov 19 10:05:28 2024 00:11:14.753 read: IOPS=5040, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1006msec) 00:11:14.753 slat (usec): min=9, max=6169, avg=93.19, stdev=562.62 00:11:14.753 clat (usec): min=1297, max=21070, avg=12921.39, stdev=1665.55 00:11:14.753 lat (usec): min=5906, max=24795, avg=13014.58, stdev=1682.75 00:11:14.753 clat percentiles (usec): 00:11:14.753 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[11994], 20.00th=[12518], 00:11:14.753 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:11:14.753 | 70.00th=[13435], 80.00th=[13698], 90.00th=[13960], 95.00th=[14222], 00:11:14.753 | 99.00th=[20055], 99.50th=[20579], 99.90th=[21103], 99.95th=[21103], 00:11:14.753 | 99.99th=[21103] 00:11:14.753 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:11:14.753 slat (usec): min=5, max=10008, avg=95.90, stdev=590.89 00:11:14.753 clat (usec): min=6228, max=20642, avg=12091.27, stdev=1379.98 00:11:14.753 lat (usec): min=7094, max=20674, avg=12187.17, stdev=1293.77 00:11:14.753 clat percentiles (usec): 00:11:14.753 | 1.00th=[ 7242], 5.00th=[10421], 10.00th=[10945], 20.00th=[11469], 00:11:14.753 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:11:14.753 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[13173], 00:11:14.753 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:11:14.753 | 99.99th=[20579] 00:11:14.753 bw ( KiB/s): min=20480, max=20521, per=30.61%, avg=20500.50, stdev=28.99, samples=2 00:11:14.753 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:11:14.753 lat (msec) : 2=0.01%, 10=4.90%, 20=94.64%, 50=0.45% 00:11:14.753 cpu : usr=4.08%, sys=13.83%, ctx=220, majf=0, minf=10 00:11:14.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:14.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.753 issued rwts: total=5071,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.753 job3: (groupid=0, jobs=1): err= 0: pid=66483: Tue Nov 19 10:05:28 2024 00:11:14.753 read: IOPS=2527, BW=9.87MiB/s 
(10.4MB/s)(10.0MiB/1013msec) 00:11:14.753 slat (usec): min=7, max=20261, avg=191.01, stdev=1464.56 00:11:14.753 clat (usec): min=15852, max=43780, avg=25445.23, stdev=2887.21 00:11:14.753 lat (usec): min=15862, max=46837, avg=25636.25, stdev=3151.06 00:11:14.753 clat percentiles (usec): 00:11:14.753 | 1.00th=[19530], 5.00th=[20579], 10.00th=[22938], 20.00th=[23987], 00:11:14.753 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[25035], 00:11:14.753 | 70.00th=[25560], 80.00th=[26346], 90.00th=[30016], 95.00th=[31327], 00:11:14.753 | 99.00th=[33424], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:11:14.753 | 99.99th=[43779] 00:11:14.753 write: IOPS=2842, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1013msec); 0 zone resets 00:11:14.753 slat (usec): min=10, max=14987, avg=172.79, stdev=1169.54 00:11:14.753 clat (usec): min=3055, max=29296, avg=21966.15, stdev=3420.65 00:11:14.753 lat (usec): min=10813, max=29335, avg=22138.94, stdev=3256.10 00:11:14.753 clat percentiles (usec): 00:11:14.753 | 1.00th=[11863], 5.00th=[13698], 10.00th=[16909], 20.00th=[20579], 00:11:14.753 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22676], 60.00th=[23200], 00:11:14.753 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24773], 95.00th=[27132], 00:11:14.753 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[29230], 00:11:14.753 | 99.99th=[29230] 00:11:14.753 bw ( KiB/s): min= 9728, max=12304, per=16.45%, avg=11016.00, stdev=1821.51, samples=2 00:11:14.753 iops : min= 2432, max= 3076, avg=2754.00, stdev=455.38, samples=2 00:11:14.753 lat (msec) : 4=0.02%, 20=10.96%, 50=89.02% 00:11:14.753 cpu : usr=2.77%, sys=7.61%, ctx=110, majf=0, minf=11 00:11:14.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:14.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.753 issued rwts: total=2560,2879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.753 00:11:14.753 Run status group 0 (all jobs): 00:11:14.753 READ: bw=61.3MiB/s (64.2MB/s), 9.87MiB/s-22.2MiB/s (10.4MB/s-23.2MB/s), io=62.1MiB (65.1MB), run=1004-1013msec 00:11:14.753 WRITE: bw=65.4MiB/s (68.6MB/s), 10.9MiB/s-23.9MiB/s (11.4MB/s-25.1MB/s), io=66.2MiB (69.5MB), run=1004-1013msec 00:11:14.753 00:11:14.753 Disk stats (read/write): 00:11:14.753 nvme0n1: ios=4990/5120, merge=0/0, ticks=52345/47988, in_queue=100333, util=87.86% 00:11:14.753 nvme0n2: ios=2094/2432, merge=0/0, ticks=48614/54006, in_queue=102620, util=88.72% 00:11:14.753 nvme0n3: ios=4096/4476, merge=0/0, ticks=50376/50234, in_queue=100610, util=88.98% 00:11:14.753 nvme0n4: ios=2048/2496, merge=0/0, ticks=49812/52507, in_queue=102319, util=89.12% 00:11:14.753 10:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:14.753 10:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66496 00:11:14.753 10:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:14.753 10:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:14.753 [global] 00:11:14.753 thread=1 00:11:14.753 invalidate=1 00:11:14.753 rw=read 00:11:14.753 time_based=1 00:11:14.753 runtime=10 00:11:14.753 ioengine=libaio 00:11:14.753 direct=1 00:11:14.753 bs=4096 00:11:14.753 iodepth=1 00:11:14.753 norandommap=1 00:11:14.753 
numjobs=1 00:11:14.753 00:11:14.753 [job0] 00:11:14.753 filename=/dev/nvme0n1 00:11:14.753 [job1] 00:11:14.753 filename=/dev/nvme0n2 00:11:14.753 [job2] 00:11:14.753 filename=/dev/nvme0n3 00:11:14.753 [job3] 00:11:14.753 filename=/dev/nvme0n4 00:11:14.753 Could not set queue depth (nvme0n1) 00:11:14.753 Could not set queue depth (nvme0n2) 00:11:14.753 Could not set queue depth (nvme0n3) 00:11:14.753 Could not set queue depth (nvme0n4) 00:11:14.753 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.753 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.753 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.753 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.753 fio-3.35 00:11:14.753 Starting 4 threads 00:11:18.036 10:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:18.036 fio: pid=66539, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.036 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=63406080, buflen=4096 00:11:18.036 10:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:18.294 fio: pid=66538, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.294 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44564480, buflen=4096 00:11:18.294 10:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.294 10:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:18.553 fio: pid=66536, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.553 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52584448, buflen=4096 00:11:18.553 10:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.553 10:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:18.811 fio: pid=66537, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.811 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57327616, buflen=4096 00:11:18.811 00:11:18.811 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66536: Tue Nov 19 10:05:32 2024 00:11:18.811 read: IOPS=3662, BW=14.3MiB/s (15.0MB/s)(50.1MiB/3506msec) 00:11:18.811 slat (usec): min=7, max=10121, avg=15.11, stdev=167.94 00:11:18.811 clat (usec): min=104, max=2897, avg=256.72, stdev=74.00 00:11:18.811 lat (usec): min=136, max=10306, avg=271.83, stdev=182.94 00:11:18.811 clat percentiles (usec): 00:11:18.811 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 165], 00:11:18.811 | 30.00th=[ 233], 40.00th=[ 245], 50.00th=[ 277], 60.00th=[ 293], 00:11:18.811 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 334], 00:11:18.811 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 603], 99.95th=[ 676], 
00:11:18.811 | 99.99th=[ 2147] 00:11:18.811 bw ( KiB/s): min=12360, max=15376, per=23.84%, avg=13392.17, stdev=1329.51, samples=6 00:11:18.811 iops : min= 3090, max= 3844, avg=3348.00, stdev=332.39, samples=6 00:11:18.811 lat (usec) : 250=43.76%, 500=56.08%, 750=0.12% 00:11:18.811 lat (msec) : 2=0.02%, 4=0.02% 00:11:18.811 cpu : usr=0.94%, sys=4.48%, ctx=12855, majf=0, minf=1 00:11:18.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.811 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.811 issued rwts: total=12839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.811 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66537: Tue Nov 19 10:05:32 2024 00:11:18.811 read: IOPS=3695, BW=14.4MiB/s (15.1MB/s)(54.7MiB/3788msec) 00:11:18.811 slat (usec): min=7, max=17759, avg=17.93, stdev=294.19 00:11:18.811 clat (usec): min=125, max=10648, avg=251.53, stdev=114.30 00:11:18.811 lat (usec): min=138, max=17972, avg=269.47, stdev=315.17 00:11:18.811 clat percentiles (usec): 00:11:18.811 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 161], 00:11:18.811 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 289], 00:11:18.811 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:11:18.811 | 99.00th=[ 383], 99.50th=[ 420], 99.90th=[ 676], 99.95th=[ 955], 00:11:18.811 | 99.99th=[ 2180] 00:11:18.811 bw ( KiB/s): min=12360, max=17191, per=25.37%, avg=14251.00, stdev=1888.15, samples=7 00:11:18.811 iops : min= 3090, max= 4297, avg=3562.57, stdev=471.81, samples=7 00:11:18.811 lat (usec) : 250=48.62%, 500=51.21%, 750=0.09%, 1000=0.03% 00:11:18.811 lat (msec) : 2=0.02%, 4=0.01%, 20=0.01% 00:11:18.811 cpu : usr=1.14%, sys=4.25%, ctx=14007, majf=0, minf=2 00:11:18.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.811 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.811 issued rwts: total=13997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.811 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66538: Tue Nov 19 10:05:32 2024 00:11:18.811 read: IOPS=3351, BW=13.1MiB/s (13.7MB/s)(42.5MiB/3247msec) 00:11:18.811 slat (usec): min=7, max=7754, avg=11.33, stdev=100.53 00:11:18.811 clat (usec): min=144, max=8138, avg=286.21, stdev=119.76 00:11:18.811 lat (usec): min=156, max=8172, avg=297.54, stdev=156.15 00:11:18.811 clat percentiles (usec): 00:11:18.811 | 1.00th=[ 172], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:11:18.811 | 30.00th=[ 251], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 302], 00:11:18.812 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 330], 95.00th=[ 343], 00:11:18.812 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 676], 99.95th=[ 1860], 00:11:18.812 | 99.99th=[ 5932] 00:11:18.812 bw ( KiB/s): min=12048, max=15640, per=23.61%, avg=13262.67, stdev=1490.97, samples=6 00:11:18.812 iops : min= 3012, max= 3910, avg=3315.67, stdev=372.74, samples=6 00:11:18.812 lat (usec) : 250=29.05%, 500=70.76%, 750=0.10%, 1000=0.02% 00:11:18.812 lat (msec) : 2=0.02%, 4=0.02%, 10=0.03% 00:11:18.812 cpu : usr=0.71%, sys=3.20%, ctx=10885, majf=0, minf=1 
00:11:18.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.812 issued rwts: total=10881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.812 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66539: Tue Nov 19 10:05:32 2024 00:11:18.812 read: IOPS=5174, BW=20.2MiB/s (21.2MB/s)(60.5MiB/2992msec) 00:11:18.812 slat (nsec): min=7859, max=96482, avg=11890.09, stdev=2664.57 00:11:18.812 clat (usec): min=136, max=2072, avg=180.22, stdev=37.86 00:11:18.812 lat (usec): min=149, max=2086, avg=192.11, stdev=37.09 00:11:18.812 clat percentiles (usec): 00:11:18.812 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 155], 00:11:18.812 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:11:18.812 | 70.00th=[ 178], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 251], 00:11:18.812 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 310], 00:11:18.812 | 99.99th=[ 734] 00:11:18.812 bw ( KiB/s): min=18192, max=22848, per=38.62%, avg=21696.00, stdev=2010.95, samples=5 00:11:18.812 iops : min= 4548, max= 5712, avg=5424.00, stdev=502.74, samples=5 00:11:18.812 lat (usec) : 250=94.67%, 500=5.30%, 750=0.01% 00:11:18.812 lat (msec) : 4=0.01% 00:11:18.812 cpu : usr=1.60%, sys=5.35%, ctx=15491, majf=0, minf=1 00:11:18.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.812 issued rwts: total=15481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.812 00:11:18.812 Run status group 0 (all jobs): 00:11:18.812 READ: bw=54.9MiB/s (57.5MB/s), 13.1MiB/s-20.2MiB/s (13.7MB/s-21.2MB/s), io=208MiB (218MB), run=2992-3788msec 00:11:18.812 00:11:18.812 Disk stats (read/write): 00:11:18.812 nvme0n1: ios=12001/0, merge=0/0, ticks=3101/0, in_queue=3101, util=95.22% 00:11:18.812 nvme0n2: ios=12943/0, merge=0/0, ticks=3267/0, in_queue=3267, util=94.59% 00:11:18.812 nvme0n3: ios=10365/0, merge=0/0, ticks=2764/0, in_queue=2764, util=96.21% 00:11:18.812 nvme0n4: ios=14990/0, merge=0/0, ticks=2618/0, in_queue=2618, util=96.76% 00:11:18.812 10:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.812 10:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:19.084 10:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.084 10:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:19.372 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.372 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:19.631 10:05:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.631 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:19.890 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.890 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:20.148 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:20.148 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66496 00:11:20.148 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:20.148 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:20.407 nvmf hotplug test: fio failed as expected 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:20.407 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.665 rmmod nvme_tcp 00:11:20.665 rmmod nvme_fabrics 00:11:20.665 rmmod nvme_keyring 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66110 ']' 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66110 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66110 ']' 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66110 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66110 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.665 killing process with pid 66110 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.665 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66110' 00:11:20.666 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66110 00:11:20.666 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66110 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:20.925 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:21.183 ************************************ 00:11:21.183 END TEST nvmf_fio_target 00:11:21.183 ************************************ 00:11:21.183 00:11:21.183 real 0m20.064s 00:11:21.183 user 1m16.353s 00:11:21.183 sys 0m9.440s 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.183 ************************************ 00:11:21.183 START TEST nvmf_bdevio 00:11:21.183 ************************************ 00:11:21.183 10:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.183 * Looking for test storage... 
00:11:21.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.184 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:21.184 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:21.184 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:21.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.443 --rc genhtml_branch_coverage=1 00:11:21.443 --rc genhtml_function_coverage=1 00:11:21.443 --rc genhtml_legend=1 00:11:21.443 --rc geninfo_all_blocks=1 00:11:21.443 --rc geninfo_unexecuted_blocks=1 00:11:21.443 00:11:21.443 ' 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:21.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.443 --rc genhtml_branch_coverage=1 00:11:21.443 --rc genhtml_function_coverage=1 00:11:21.443 --rc genhtml_legend=1 00:11:21.443 --rc geninfo_all_blocks=1 00:11:21.443 --rc geninfo_unexecuted_blocks=1 00:11:21.443 00:11:21.443 ' 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:21.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.443 --rc genhtml_branch_coverage=1 00:11:21.443 --rc genhtml_function_coverage=1 00:11:21.443 --rc genhtml_legend=1 00:11:21.443 --rc geninfo_all_blocks=1 00:11:21.443 --rc geninfo_unexecuted_blocks=1 00:11:21.443 00:11:21.443 ' 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:21.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.443 --rc genhtml_branch_coverage=1 00:11:21.443 --rc genhtml_function_coverage=1 00:11:21.443 --rc genhtml_legend=1 00:11:21.443 --rc geninfo_all_blocks=1 00:11:21.443 --rc geninfo_unexecuted_blocks=1 00:11:21.443 00:11:21.443 ' 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.443 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.444 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
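For context, the bdevio run starting here exercises the same kind of NVMe/TCP target the fio test above just tore down: malloc bdevs (per the variables above, 64 MiB with 512-byte blocks) exposed through subsystem nqn.2016-06.io.spdk:cnode1. A minimal sketch of that target-side configuration via rpc.py follows; the bdev name, serial number, address and port are taken from this log, and everything else assumes stock SPDK RPC defaults rather than the exact options the test scripts pass:

    # create one 64 MiB malloc bdev with 512-byte blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # enable the TCP transport and export the bdev as a namespace of cnode1
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listen on the target-side veth address set up in the nvmftestinit trace below
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420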
00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:21.444 Cannot find device "nvmf_init_br" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:21.444 Cannot find device "nvmf_init_br2" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:21.444 Cannot find device "nvmf_tgt_br" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.444 Cannot find device "nvmf_tgt_br2" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:21.444 Cannot find device "nvmf_init_br" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:21.444 Cannot find device "nvmf_init_br2" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:21.444 Cannot find device "nvmf_tgt_br" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:21.444 Cannot find device "nvmf_tgt_br2" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:21.444 Cannot find device "nvmf_br" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:21.444 Cannot find device "nvmf_init_if" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:21.444 Cannot find device "nvmf_init_if2" 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:21.444 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.703 
10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:21.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:21.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:11:21.703 00:11:21.703 --- 10.0.0.3 ping statistics --- 00:11:21.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.703 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:21.703 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:21.703 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:11:21.703 00:11:21.703 --- 10.0.0.4 ping statistics --- 00:11:21.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.703 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:21.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:21.703 00:11:21.703 --- 10.0.0.1 ping statistics --- 00:11:21.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.703 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:21.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
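The ipts wrapper above inserts plain ACCEPT rules for TCP port 4420 but tags each rule with an 'SPDK_NVMF:' comment so teardown can later remove exactly the rules this run added; the pings then confirm both directions across the bridge before any NVMe traffic flows. A minimal sketch of the tag-and-strip pattern, assuming iptables with the comment match module:

    # add a rule that carries a recognizable tag in its comment
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # later (what iptr does during nvmftestfini): drop every tagged rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore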
00:11:21.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:11:21.703 00:11:21.703 --- 10.0.0.2 ping statistics --- 00:11:21.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.703 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.703 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66861 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66861 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66861 ']' 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.962 10:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.962 [2024-11-19 10:05:35.679401] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:21.962 [2024-11-19 10:05:35.679721] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.962 [2024-11-19 10:05:35.823466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.221 [2024-11-19 10:05:35.886992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.221 [2024-11-19 10:05:35.887050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.221 [2024-11-19 10:05:35.887062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.221 [2024-11-19 10:05:35.887070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.221 [2024-11-19 10:05:35.887078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.221 [2024-11-19 10:05:35.888244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:22.221 [2024-11-19 10:05:35.888359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:22.221 [2024-11-19 10:05:35.888501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:22.221 [2024-11-19 10:05:35.888504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.221 [2024-11-19 10:05:35.942317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 [2024-11-19 10:05:36.049745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 Malloc0 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 [2024-11-19 10:05:36.118941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:22.480 { 00:11:22.480 "params": { 00:11:22.480 "name": "Nvme$subsystem", 00:11:22.480 "trtype": "$TEST_TRANSPORT", 00:11:22.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.480 "adrfam": "ipv4", 00:11:22.480 "trsvcid": "$NVMF_PORT", 00:11:22.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.480 "hdgst": ${hdgst:-false}, 00:11:22.480 "ddgst": ${ddgst:-false} 00:11:22.480 }, 00:11:22.480 "method": "bdev_nvme_attach_controller" 00:11:22.480 } 00:11:22.480 EOF 00:11:22.480 )") 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
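With the RPC socket up, bdevio.sh provisions the target entirely over RPC: a TCP transport with an 8192-byte io-unit size, a 64 MiB malloc bdev with 512-byte blocks, a subsystem that allows any host, a namespace backed by the malloc bdev, and a TCP listener on 10.0.0.3:4420. A condensed equivalent using scripts/rpc.py from the SPDK repo, run from the repo root (paths, names, and flags mirror the log; the target is started inside the test namespace, as nvmfappstart does):

    # start the target in the namespace; the harness polls /var/tmp/spdk.sock (waitforlisten)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    sleep 3
    # provision: transport -> bdev -> subsystem -> namespace -> listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420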
00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:22.480 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:22.480 "params": { 00:11:22.480 "name": "Nvme1", 00:11:22.480 "trtype": "tcp", 00:11:22.480 "traddr": "10.0.0.3", 00:11:22.480 "adrfam": "ipv4", 00:11:22.480 "trsvcid": "4420", 00:11:22.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.481 "hdgst": false, 00:11:22.481 "ddgst": false 00:11:22.481 }, 00:11:22.481 "method": "bdev_nvme_attach_controller" 00:11:22.481 }' 00:11:22.481 [2024-11-19 10:05:36.175962] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:11:22.481 [2024-11-19 10:05:36.176673] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66890 ] 00:11:22.481 [2024-11-19 10:05:36.323679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:22.739 [2024-11-19 10:05:36.393583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.739 [2024-11-19 10:05:36.393735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.739 [2024-11-19 10:05:36.393740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.739 [2024-11-19 10:05:36.457375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.739 I/O targets: 00:11:22.739 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:22.739 00:11:22.739 00:11:22.739 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.739 http://cunit.sourceforge.net/ 00:11:22.739 00:11:22.739 00:11:22.739 Suite: bdevio tests on: Nvme1n1 00:11:22.739 Test: blockdev write read block ...passed 00:11:22.739 Test: blockdev write zeroes read block ...passed 00:11:22.739 Test: blockdev write zeroes read no split ...passed 00:11:22.739 Test: blockdev write zeroes read split ...passed 00:11:22.739 Test: blockdev write zeroes read split partial ...passed 00:11:22.739 Test: blockdev reset ...[2024-11-19 10:05:36.605406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:22.739 [2024-11-19 10:05:36.605933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fa180 (9): Bad file descriptor 00:11:22.739 [2024-11-19 10:05:36.621377] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:22.739 passed 00:11:22.739 Test: blockdev write read 8 blocks ...passed 00:11:22.739 Test: blockdev write read size > 128k ...passed 00:11:22.739 Test: blockdev write read invalid size ...passed 00:11:22.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.739 Test: blockdev write read max offset ...passed 00:11:22.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.739 Test: blockdev writev readv 8 blocks ...passed 00:11:22.739 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.739 Test: blockdev writev readv block ...passed 00:11:22.998 Test: blockdev writev readv size > 128k ...passed 00:11:22.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.998 Test: blockdev comparev and writev ...[2024-11-19 10:05:36.632237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.998 [2024-11-19 10:05:36.632577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.632608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.998 [2024-11-19 10:05:36.632620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.632966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.998 [2024-11-19 10:05:36.632990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.633008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.998 [2024-11-19 10:05:36.633019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.633309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.998 [2024-11-19 10:05:36.633332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.633349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.998 [2024-11-19 10:05:36.633360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.633638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.998 [2024-11-19 10:05:36.633660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.633677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.998 [2024-11-19 10:05:36.633687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:22.998 passed 00:11:22.998 Test: blockdev nvme passthru rw ...passed 00:11:22.998 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.998 Test: blockdev nvme admin passthru ...[2024-11-19 10:05:36.634949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.998 [2024-11-19 10:05:36.635154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.635281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.998 [2024-11-19 10:05:36.635304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.635420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.998 [2024-11-19 10:05:36.635441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:22.998 [2024-11-19 10:05:36.635553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.998 [2024-11-19 10:05:36.635574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:22.998 passed 00:11:22.998 Test: blockdev copy ...passed 00:11:22.998 00:11:22.998 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.998 suites 1 1 n/a 0 0 00:11:22.998 tests 23 23 23 0 0 00:11:22.998 asserts 152 152 152 0 n/a 00:11:22.998 00:11:22.998 Elapsed time = 0.142 seconds 00:11:22.998 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.998 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.998 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.998 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.998 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:22.998 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:22.998 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:22.998 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.256 rmmod nvme_tcp 00:11:23.256 rmmod nvme_fabrics 00:11:23.256 rmmod nvme_keyring 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
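The bdevio suite above (23/23 tests passed against Nvme1n1) consumed a JSON config generated on the fly and passed via /dev/fd/62; it attaches a bdev controller named Nvme1 over NVMe/TCP to the subsystem created earlier. A standalone sketch in the standard SPDK JSON-config layout (hand-written here, not the exact file the helper emitted; parameter values are copied from the log):

    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json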
00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66861 ']' 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66861 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66861 ']' 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66861 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66861 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:23.256 killing process with pid 66861 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66861' 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66861 00:11:23.256 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66861 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.514 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:23.773 00:11:23.773 real 0m2.510s 00:11:23.773 user 0m6.637s 00:11:23.773 sys 0m0.869s 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.773 ************************************ 00:11:23.773 END TEST nvmf_bdevio 00:11:23.773 ************************************ 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:23.773 00:11:23.773 real 2m36.304s 00:11:23.773 user 6m50.667s 00:11:23.773 sys 0m53.137s 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.773 ************************************ 00:11:23.773 END TEST nvmf_target_core 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.773 ************************************ 00:11:23.773 10:05:37 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:23.773 10:05:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.773 10:05:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.773 10:05:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:23.773 ************************************ 00:11:23.773 START TEST nvmf_target_extra 00:11:23.773 ************************************ 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:23.773 * Looking for test storage... 
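nvmftestfini above unwinds the setup in order: kill the target process, strip the tagged iptables rules, detach the bridge ports, delete the bridge and host-side veth devices, and finally remove the namespace. A condensed sketch of the same cleanup, assuming the device and namespace names from the log and that nvmf_tgt was started as a child of this shell (run as root):

    kill "$nvmfpid" 2>/dev/null; wait "$nvmfpid" 2>/dev/null    # stop nvmf_tgt (killprocess)
    iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk    # also destroys the veth ends that were moved inside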
00:11:23.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.773 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.033 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.034 --rc genhtml_branch_coverage=1 00:11:24.034 --rc genhtml_function_coverage=1 00:11:24.034 --rc genhtml_legend=1 00:11:24.034 --rc geninfo_all_blocks=1 00:11:24.034 --rc geninfo_unexecuted_blocks=1 00:11:24.034 00:11:24.034 ' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.034 --rc genhtml_branch_coverage=1 00:11:24.034 --rc genhtml_function_coverage=1 00:11:24.034 --rc genhtml_legend=1 00:11:24.034 --rc geninfo_all_blocks=1 00:11:24.034 --rc geninfo_unexecuted_blocks=1 00:11:24.034 00:11:24.034 ' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.034 --rc genhtml_branch_coverage=1 00:11:24.034 --rc genhtml_function_coverage=1 00:11:24.034 --rc genhtml_legend=1 00:11:24.034 --rc geninfo_all_blocks=1 00:11:24.034 --rc geninfo_unexecuted_blocks=1 00:11:24.034 00:11:24.034 ' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.034 --rc genhtml_branch_coverage=1 00:11:24.034 --rc genhtml_function_coverage=1 00:11:24.034 --rc genhtml_legend=1 00:11:24.034 --rc geninfo_all_blocks=1 00:11:24.034 --rc geninfo_unexecuted_blocks=1 00:11:24.034 00:11:24.034 ' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.034 10:05:37 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.034 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.034 10:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.035 ************************************ 00:11:24.035 START TEST nvmf_auth_target 00:11:24.035 ************************************ 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:24.035 * Looking for test storage... 
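Each test script sources nvmf/common.sh, which derives the host identity from nvme gen-hostnqn: an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, with the trailing UUID reused as NVME_HOSTID, and both handed to nvme connect as --hostnqn/--hostid. A small sketch, assuming nvme-cli is installed (the uuidgen fallback is a hypothetical addition, not part of the harness):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:6147973c-...
    # fallback if nvme-cli is unavailable (assumes uuidgen from util-linux)
    : "${NVME_HOSTNQN:=nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)}"
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # UUID portion after the last colon
    echo "$NVME_HOSTNQN" "$NVME_HOSTID"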
00:11:24.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.035 --rc genhtml_branch_coverage=1 00:11:24.035 --rc genhtml_function_coverage=1 00:11:24.035 --rc genhtml_legend=1 00:11:24.035 --rc geninfo_all_blocks=1 00:11:24.035 --rc geninfo_unexecuted_blocks=1 00:11:24.035 00:11:24.035 ' 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.035 --rc genhtml_branch_coverage=1 00:11:24.035 --rc genhtml_function_coverage=1 00:11:24.035 --rc genhtml_legend=1 00:11:24.035 --rc geninfo_all_blocks=1 00:11:24.035 --rc geninfo_unexecuted_blocks=1 00:11:24.035 00:11:24.035 ' 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.035 --rc genhtml_branch_coverage=1 00:11:24.035 --rc genhtml_function_coverage=1 00:11:24.035 --rc genhtml_legend=1 00:11:24.035 --rc geninfo_all_blocks=1 00:11:24.035 --rc geninfo_unexecuted_blocks=1 00:11:24.035 00:11:24.035 ' 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.035 --rc genhtml_branch_coverage=1 00:11:24.035 --rc genhtml_function_coverage=1 00:11:24.035 --rc genhtml_legend=1 00:11:24.035 --rc geninfo_all_blocks=1 00:11:24.035 --rc geninfo_unexecuted_blocks=1 00:11:24.035 00:11:24.035 ' 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.035 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.296 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:24.296 
10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:24.296 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:24.296 Cannot find device "nvmf_init_br" 00:11:24.297 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:24.297 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:24.297 Cannot find device "nvmf_init_br2" 00:11:24.297 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:24.297 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:24.297 Cannot find device "nvmf_tgt_br" 00:11:24.297 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:24.297 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.297 Cannot find device "nvmf_tgt_br2" 00:11:24.297 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:24.297 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:24.297 Cannot find device "nvmf_init_br" 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:24.297 Cannot find device "nvmf_init_br2" 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:24.297 Cannot find device "nvmf_tgt_br" 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:24.297 Cannot find device "nvmf_tgt_br2" 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:24.297 Cannot find device "nvmf_br" 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:24.297 Cannot find device "nvmf_init_if" 00:11:24.297 10:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:24.297 Cannot find device "nvmf_init_if2" 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:24.297 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:24.565 10:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:24.565 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:24.565 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:11:24.565 00:11:24.565 --- 10.0.0.3 ping statistics --- 00:11:24.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.565 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:24.565 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:24.565 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:11:24.565 00:11:24.565 --- 10.0.0.4 ping statistics --- 00:11:24.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.565 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:24.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:24.565 00:11:24.565 --- 10.0.0.1 ping statistics --- 00:11:24.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.565 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:24.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:24.565 00:11:24.565 --- 10.0.0.2 ping statistics --- 00:11:24.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.565 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.565 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67176 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67176 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67176 ']' 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
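The nvmf_veth_init steps traced above build a small virtual test network: four veth pairs are created, the target-side nvmf_tgt_if/nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace, the *_br ends are enslaved to the nvmf_br bridge, initiator addresses 10.0.0.1/24 and 10.0.0.2/24 stay in the root namespace while target addresses 10.0.0.3/24 and 10.0.0.4/24 live inside the namespace, and iptables rules admit NVMe/TCP on port 4420 plus forwarding across the bridge. The following condensed sketch (run as root; interface names and addresses are taken from the trace, the iptables comment tagging is dropped for brevity) reproduces that topology:

# Sketch of the veth/netns topology that nvmf_veth_init sets up above (assumes root).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends will be enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators 10.0.0.1-2 in the root namespace, targets 10.0.0.3-4 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and tie the bridge-side ends together with nvmf_br.
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br

# Admit NVMe/TCP (port 4420) from the initiator interfaces and allow bridge forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, as in the trace: each side can reach the other across the bridge.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1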
00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.566 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67208 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:25.942 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=95747203e4b22b54940ea39353371531e48dfb47b9291ea1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8bh 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 95747203e4b22b54940ea39353371531e48dfb47b9291ea1 0 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 95747203e4b22b54940ea39353371531e48dfb47b9291ea1 0 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=95747203e4b22b54940ea39353371531e48dfb47b9291ea1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:25.943 10:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8bh 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8bh 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.8bh 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=598def41de56d3015e9baa00d0873da953c1ff1c941fc56646e368d53bf09474 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.szg 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 598def41de56d3015e9baa00d0873da953c1ff1c941fc56646e368d53bf09474 3 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 598def41de56d3015e9baa00d0873da953c1ff1c941fc56646e368d53bf09474 3 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=598def41de56d3015e9baa00d0873da953c1ff1c941fc56646e368d53bf09474 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.szg 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.szg 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.szg 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:25.943 10:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=30cf38f9b47a3c586ddb9192a77bd9d9 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dYY 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 30cf38f9b47a3c586ddb9192a77bd9d9 1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 30cf38f9b47a3c586ddb9192a77bd9d9 1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=30cf38f9b47a3c586ddb9192a77bd9d9 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dYY 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dYY 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dYY 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5a495c1bf80719b5dc296b62e86122fb8c0ca5dc95740023 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.79M 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5a495c1bf80719b5dc296b62e86122fb8c0ca5dc95740023 2 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5a495c1bf80719b5dc296b62e86122fb8c0ca5dc95740023 2 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5a495c1bf80719b5dc296b62e86122fb8c0ca5dc95740023 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.79M 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.79M 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.79M 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8279dbb949e7f11e5f40292543e78e84b08164a512c0c1b8 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.63k 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8279dbb949e7f11e5f40292543e78e84b08164a512c0c1b8 2 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8279dbb949e7f11e5f40292543e78e84b08164a512c0c1b8 2 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8279dbb949e7f11e5f40292543e78e84b08164a512c0c1b8 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.63k 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.63k 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.63k 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:25.943 10:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:25.943 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6222baa4940f8eb6eb722175aa60c47e 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JVe 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6222baa4940f8eb6eb722175aa60c47e 1 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6222baa4940f8eb6eb722175aa60c47e 1 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6222baa4940f8eb6eb722175aa60c47e 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:25.944 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JVe 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JVe 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.JVe 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2652845748ca276d3cfc179166a69b9808da28fad16e86714e3f4cd2005095a 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MGW 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
b2652845748ca276d3cfc179166a69b9808da28fad16e86714e3f4cd2005095a 3 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b2652845748ca276d3cfc179166a69b9808da28fad16e86714e3f4cd2005095a 3 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2652845748ca276d3cfc179166a69b9808da28fad16e86714e3f4cd2005095a 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MGW 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MGW 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.MGW 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67176 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67176 ']' 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.202 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67208 /var/tmp/host.sock 00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67208 ']' 00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.461 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8bh 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.719 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.720 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8bh 00:11:26.720 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8bh 00:11:26.978 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.szg ]] 00:11:26.978 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.szg 00:11:26.978 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.978 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.978 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.978 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.szg 00:11:26.979 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.szg 00:11:27.237 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:27.237 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dYY 00:11:27.237 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.237 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.237 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.237 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dYY 00:11:27.237 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dYY 00:11:27.805 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.79M ]] 00:11:27.805 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.79M 00:11:27.805 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.805 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.805 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.805 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.79M 00:11:27.805 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.79M 00:11:28.064 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:28.064 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.63k 00:11:28.064 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.064 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.064 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.064 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.63k 00:11:28.064 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.63k 00:11:28.323 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.JVe ]] 00:11:28.323 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JVe 00:11:28.323 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.323 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.323 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.323 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JVe 00:11:28.323 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JVe 00:11:28.582 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:28.582 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MGW 00:11:28.582 10:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.582 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.582 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.582 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.MGW 00:11:28.582 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.MGW 00:11:28.841 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:28.841 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:28.841 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.841 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.841 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:28.841 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.103 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.365 00:11:29.365 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.365 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.365 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.627 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.627 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.627 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.627 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.627 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.627 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.627 { 00:11:29.627 "cntlid": 1, 00:11:29.627 "qid": 0, 00:11:29.627 "state": "enabled", 00:11:29.627 "thread": "nvmf_tgt_poll_group_000", 00:11:29.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:29.627 "listen_address": { 00:11:29.627 "trtype": "TCP", 00:11:29.627 "adrfam": "IPv4", 00:11:29.627 "traddr": "10.0.0.3", 00:11:29.627 "trsvcid": "4420" 00:11:29.627 }, 00:11:29.627 "peer_address": { 00:11:29.627 "trtype": "TCP", 00:11:29.627 "adrfam": "IPv4", 00:11:29.627 "traddr": "10.0.0.1", 00:11:29.627 "trsvcid": "47536" 00:11:29.627 }, 00:11:29.627 "auth": { 00:11:29.627 "state": "completed", 00:11:29.627 "digest": "sha256", 00:11:29.627 "dhgroup": "null" 00:11:29.627 } 00:11:29.627 } 00:11:29.627 ]' 00:11:29.627 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.628 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.628 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.887 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:29.887 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.887 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.887 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.887 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.145 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:11:30.145 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:11:35.419 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.419 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:35.419 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.419 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.419 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.420 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.421 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.421 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.421 10:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.421 00:11:35.421 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.421 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.421 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.681 { 00:11:35.681 "cntlid": 3, 00:11:35.681 "qid": 0, 00:11:35.681 "state": "enabled", 00:11:35.681 "thread": "nvmf_tgt_poll_group_000", 00:11:35.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:35.681 "listen_address": { 00:11:35.681 "trtype": "TCP", 00:11:35.681 "adrfam": "IPv4", 00:11:35.681 "traddr": "10.0.0.3", 00:11:35.681 "trsvcid": "4420" 00:11:35.681 }, 00:11:35.681 "peer_address": { 00:11:35.681 "trtype": "TCP", 00:11:35.681 "adrfam": "IPv4", 00:11:35.681 "traddr": "10.0.0.1", 00:11:35.681 "trsvcid": "47562" 00:11:35.681 }, 00:11:35.681 "auth": { 00:11:35.681 "state": "completed", 00:11:35.681 "digest": "sha256", 00:11:35.681 "dhgroup": "null" 00:11:35.681 } 00:11:35.681 } 00:11:35.681 ]' 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.681 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.940 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret 
DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:11:35.940 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:11:36.877 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.877 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:36.877 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.877 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.877 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.878 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.878 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:36.878 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.137 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.396 00:11:37.396 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.396 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.396 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.655 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.655 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.655 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.655 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.655 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.655 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.655 { 00:11:37.655 "cntlid": 5, 00:11:37.655 "qid": 0, 00:11:37.655 "state": "enabled", 00:11:37.655 "thread": "nvmf_tgt_poll_group_000", 00:11:37.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:37.655 "listen_address": { 00:11:37.655 "trtype": "TCP", 00:11:37.655 "adrfam": "IPv4", 00:11:37.655 "traddr": "10.0.0.3", 00:11:37.655 "trsvcid": "4420" 00:11:37.655 }, 00:11:37.655 "peer_address": { 00:11:37.655 "trtype": "TCP", 00:11:37.655 "adrfam": "IPv4", 00:11:37.655 "traddr": "10.0.0.1", 00:11:37.655 "trsvcid": "47588" 00:11:37.655 }, 00:11:37.655 "auth": { 00:11:37.655 "state": "completed", 00:11:37.655 "digest": "sha256", 00:11:37.655 "dhgroup": "null" 00:11:37.655 } 00:11:37.655 } 00:11:37.655 ]' 00:11:37.655 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.915 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.915 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.915 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:37.915 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.915 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.915 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.915 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.175 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:11:38.175 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.112 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.371 00:11:39.630 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.630 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.630 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.889 { 00:11:39.889 "cntlid": 7, 00:11:39.889 "qid": 0, 00:11:39.889 "state": "enabled", 00:11:39.889 "thread": "nvmf_tgt_poll_group_000", 00:11:39.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:39.889 "listen_address": { 00:11:39.889 "trtype": "TCP", 00:11:39.889 "adrfam": "IPv4", 00:11:39.889 "traddr": "10.0.0.3", 00:11:39.889 "trsvcid": "4420" 00:11:39.889 }, 00:11:39.889 "peer_address": { 00:11:39.889 "trtype": "TCP", 00:11:39.889 "adrfam": "IPv4", 00:11:39.889 "traddr": "10.0.0.1", 00:11:39.889 "trsvcid": "45834" 00:11:39.889 }, 00:11:39.889 "auth": { 00:11:39.889 "state": "completed", 00:11:39.889 "digest": "sha256", 00:11:39.889 "dhgroup": "null" 00:11:39.889 } 00:11:39.889 } 00:11:39.889 ]' 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.889 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.148 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:11:40.148 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.132 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.700 00:11:41.700 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.700 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.700 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.700 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.700 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.700 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.700 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.959 { 00:11:41.959 "cntlid": 9, 00:11:41.959 "qid": 0, 00:11:41.959 "state": "enabled", 00:11:41.959 "thread": "nvmf_tgt_poll_group_000", 00:11:41.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:41.959 "listen_address": { 00:11:41.959 "trtype": "TCP", 00:11:41.959 "adrfam": "IPv4", 00:11:41.959 "traddr": "10.0.0.3", 00:11:41.959 "trsvcid": "4420" 00:11:41.959 }, 00:11:41.959 "peer_address": { 00:11:41.959 "trtype": "TCP", 00:11:41.959 "adrfam": "IPv4", 00:11:41.959 "traddr": "10.0.0.1", 00:11:41.959 "trsvcid": "45870" 00:11:41.959 }, 00:11:41.959 "auth": { 00:11:41.959 "state": "completed", 00:11:41.959 "digest": "sha256", 00:11:41.959 "dhgroup": "ffdhe2048" 00:11:41.959 } 00:11:41.959 } 00:11:41.959 ]' 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.959 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.218 
10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:11:42.218 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:11:43.155 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.155 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:43.155 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.155 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.155 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.155 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.155 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:43.155 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:43.413 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:43.413 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.413 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.413 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:43.413 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:43.414 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.414 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.414 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.414 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.414 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.414 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.414 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.414 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.672 00:11:43.672 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.672 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.672 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.239 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.239 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.239 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.240 { 00:11:44.240 "cntlid": 11, 00:11:44.240 "qid": 0, 00:11:44.240 "state": "enabled", 00:11:44.240 "thread": "nvmf_tgt_poll_group_000", 00:11:44.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:44.240 "listen_address": { 00:11:44.240 "trtype": "TCP", 00:11:44.240 "adrfam": "IPv4", 00:11:44.240 "traddr": "10.0.0.3", 00:11:44.240 "trsvcid": "4420" 00:11:44.240 }, 00:11:44.240 "peer_address": { 00:11:44.240 "trtype": "TCP", 00:11:44.240 "adrfam": "IPv4", 00:11:44.240 "traddr": "10.0.0.1", 00:11:44.240 "trsvcid": "45892" 00:11:44.240 }, 00:11:44.240 "auth": { 00:11:44.240 "state": "completed", 00:11:44.240 "digest": "sha256", 00:11:44.240 "dhgroup": "ffdhe2048" 00:11:44.240 } 00:11:44.240 } 00:11:44.240 ]' 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.240 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.240 
10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.498 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:11:44.498 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:11:45.432 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.432 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:45.432 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.432 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.432 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.432 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.432 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:45.432 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.690 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.948 00:11:45.948 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.948 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.948 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.208 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.208 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.208 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.208 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.208 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.208 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.208 { 00:11:46.208 "cntlid": 13, 00:11:46.208 "qid": 0, 00:11:46.208 "state": "enabled", 00:11:46.208 "thread": "nvmf_tgt_poll_group_000", 00:11:46.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:46.208 "listen_address": { 00:11:46.208 "trtype": "TCP", 00:11:46.208 "adrfam": "IPv4", 00:11:46.208 "traddr": "10.0.0.3", 00:11:46.208 "trsvcid": "4420" 00:11:46.208 }, 00:11:46.208 "peer_address": { 00:11:46.208 "trtype": "TCP", 00:11:46.208 "adrfam": "IPv4", 00:11:46.208 "traddr": "10.0.0.1", 00:11:46.208 "trsvcid": "45920" 00:11:46.208 }, 00:11:46.208 "auth": { 00:11:46.208 "state": "completed", 00:11:46.208 "digest": "sha256", 00:11:46.208 "dhgroup": "ffdhe2048" 00:11:46.208 } 00:11:46.208 } 00:11:46.208 ]' 00:11:46.208 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.467 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.467 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.467 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:46.467 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.467 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.467 10:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.467 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.725 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:11:46.725 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:11:47.292 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.292 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:47.292 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.292 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.292 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.292 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.292 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:47.292 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
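For reference, the sequence this trace repeats for every digest/dhgroup/key combination (here sha256 with ffdhe2048 and key3) condenses to the commands below. This is a minimal sketch assembled from the calls visible in the trace, not an excerpt of it: the rpc/subnqn/hostnqn shell variables are illustrative shorthand, the target-side calls go through the target's default RPC socket (the rpc_cmd wrapper above), and the DH-CHAP keyring entries key0..key3 / ckey0..ckey3 are assumed to have been registered earlier in the run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # illustrative shorthand, path as seen in the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a

# Host side: restrict the initiator to the digest and DH group under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: allow the host on the subsystem with the DH-CHAP key for this round
# (plus --dhchap-ctrlr-key ckeyN in the rounds that exercise bidirectional auth).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# Host side: attach a bdev controller over TCP, authenticating with the same key.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3

# Verify the controller came up and the qpair reports the negotiated auth parameters.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# Tear down before the next digest/dhgroup/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each round in the trace additionally re-checks the same key pair with the kernel initiator (nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:... followed by nvme disconnect -n "$subnqn") before the host entry is removed from the subsystem.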
00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.859 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.118 00:11:48.118 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.118 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.118 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.376 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.376 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.376 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.376 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.376 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.376 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.376 { 00:11:48.376 "cntlid": 15, 00:11:48.376 "qid": 0, 00:11:48.376 "state": "enabled", 00:11:48.376 "thread": "nvmf_tgt_poll_group_000", 00:11:48.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:48.376 "listen_address": { 00:11:48.376 "trtype": "TCP", 00:11:48.376 "adrfam": "IPv4", 00:11:48.376 "traddr": "10.0.0.3", 00:11:48.376 "trsvcid": "4420" 00:11:48.376 }, 00:11:48.376 "peer_address": { 00:11:48.376 "trtype": "TCP", 00:11:48.376 "adrfam": "IPv4", 00:11:48.376 "traddr": "10.0.0.1", 00:11:48.376 "trsvcid": "42020" 00:11:48.376 }, 00:11:48.376 "auth": { 00:11:48.376 "state": "completed", 00:11:48.376 "digest": "sha256", 00:11:48.376 "dhgroup": "ffdhe2048" 00:11:48.376 } 00:11:48.376 } 00:11:48.376 ]' 00:11:48.376 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.636 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.636 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.636 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.636 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.636 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.636 
10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.636 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.894 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:11:48.894 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:11:49.829 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.830 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:49.830 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.830 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.830 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.830 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.830 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.830 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:49.830 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.089 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.348 00:11:50.348 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.348 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.348 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.607 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.607 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.607 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.607 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.607 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.607 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.607 { 00:11:50.607 "cntlid": 17, 00:11:50.607 "qid": 0, 00:11:50.607 "state": "enabled", 00:11:50.607 "thread": "nvmf_tgt_poll_group_000", 00:11:50.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:50.607 "listen_address": { 00:11:50.607 "trtype": "TCP", 00:11:50.607 "adrfam": "IPv4", 00:11:50.607 "traddr": "10.0.0.3", 00:11:50.607 "trsvcid": "4420" 00:11:50.607 }, 00:11:50.607 "peer_address": { 00:11:50.607 "trtype": "TCP", 00:11:50.607 "adrfam": "IPv4", 00:11:50.607 "traddr": "10.0.0.1", 00:11:50.607 "trsvcid": "42050" 00:11:50.607 }, 00:11:50.607 "auth": { 00:11:50.607 "state": "completed", 00:11:50.607 "digest": "sha256", 00:11:50.607 "dhgroup": "ffdhe3072" 00:11:50.607 } 00:11:50.607 } 00:11:50.607 ]' 00:11:50.607 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.867 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.867 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.867 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:50.867 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.867 10:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.867 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.867 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.127 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:11:51.127 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.063 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.630 00:11:52.630 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.630 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.630 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.890 { 00:11:52.890 "cntlid": 19, 00:11:52.890 "qid": 0, 00:11:52.890 "state": "enabled", 00:11:52.890 "thread": "nvmf_tgt_poll_group_000", 00:11:52.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:52.890 "listen_address": { 00:11:52.890 "trtype": "TCP", 00:11:52.890 "adrfam": "IPv4", 00:11:52.890 "traddr": "10.0.0.3", 00:11:52.890 "trsvcid": "4420" 00:11:52.890 }, 00:11:52.890 "peer_address": { 00:11:52.890 "trtype": "TCP", 00:11:52.890 "adrfam": "IPv4", 00:11:52.890 "traddr": "10.0.0.1", 00:11:52.890 "trsvcid": "42076" 00:11:52.890 }, 00:11:52.890 "auth": { 00:11:52.890 "state": "completed", 00:11:52.890 "digest": "sha256", 00:11:52.890 "dhgroup": "ffdhe3072" 00:11:52.890 } 00:11:52.890 } 00:11:52.890 ]' 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.890 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.457 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:11:53.458 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:11:54.025 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.025 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:54.025 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.025 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.025 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.025 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:54.025 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.285 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.853 00:11:54.853 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.853 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.853 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.112 { 00:11:55.112 "cntlid": 21, 00:11:55.112 "qid": 0, 00:11:55.112 "state": "enabled", 00:11:55.112 "thread": "nvmf_tgt_poll_group_000", 00:11:55.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:55.112 "listen_address": { 00:11:55.112 "trtype": "TCP", 00:11:55.112 "adrfam": "IPv4", 00:11:55.112 "traddr": "10.0.0.3", 00:11:55.112 "trsvcid": "4420" 00:11:55.112 }, 00:11:55.112 "peer_address": { 00:11:55.112 "trtype": "TCP", 00:11:55.112 "adrfam": "IPv4", 00:11:55.112 "traddr": "10.0.0.1", 00:11:55.112 "trsvcid": "42110" 00:11:55.112 }, 00:11:55.112 "auth": { 00:11:55.112 "state": "completed", 00:11:55.112 "digest": "sha256", 00:11:55.112 "dhgroup": "ffdhe3072" 00:11:55.112 } 00:11:55.112 } 00:11:55.112 ]' 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.112 10:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.112 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.113 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.113 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.372 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:11:55.372 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:11:56.309 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.309 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:56.309 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.309 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.309 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.309 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.309 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:56.309 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:56.569 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:56.569 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.569 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.570 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.829 00:11:56.829 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.829 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.829 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.397 { 00:11:57.397 "cntlid": 23, 00:11:57.397 "qid": 0, 00:11:57.397 "state": "enabled", 00:11:57.397 "thread": "nvmf_tgt_poll_group_000", 00:11:57.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:57.397 "listen_address": { 00:11:57.397 "trtype": "TCP", 00:11:57.397 "adrfam": "IPv4", 00:11:57.397 "traddr": "10.0.0.3", 00:11:57.397 "trsvcid": "4420" 00:11:57.397 }, 00:11:57.397 "peer_address": { 00:11:57.397 "trtype": "TCP", 00:11:57.397 "adrfam": "IPv4", 00:11:57.397 "traddr": "10.0.0.1", 00:11:57.397 "trsvcid": "42128" 00:11:57.397 }, 00:11:57.397 "auth": { 00:11:57.397 "state": "completed", 00:11:57.397 "digest": "sha256", 00:11:57.397 "dhgroup": "ffdhe3072" 00:11:57.397 } 00:11:57.397 } 00:11:57.397 ]' 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.397 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.657 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:11:57.657 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:58.595 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.854 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.113 00:11:59.113 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.113 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.113 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.682 { 00:11:59.682 "cntlid": 25, 00:11:59.682 "qid": 0, 00:11:59.682 "state": "enabled", 00:11:59.682 "thread": "nvmf_tgt_poll_group_000", 00:11:59.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:11:59.682 "listen_address": { 00:11:59.682 "trtype": "TCP", 00:11:59.682 "adrfam": "IPv4", 00:11:59.682 "traddr": "10.0.0.3", 00:11:59.682 "trsvcid": "4420" 00:11:59.682 }, 00:11:59.682 "peer_address": { 00:11:59.682 "trtype": "TCP", 00:11:59.682 "adrfam": "IPv4", 00:11:59.682 "traddr": "10.0.0.1", 00:11:59.682 "trsvcid": "38622" 00:11:59.682 }, 00:11:59.682 "auth": { 00:11:59.682 "state": "completed", 00:11:59.682 "digest": "sha256", 00:11:59.682 "dhgroup": "ffdhe4096" 00:11:59.682 } 00:11:59.682 } 00:11:59.682 ]' 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.682 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.941 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:11:59.941 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.877 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.445 00:12:01.445 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.445 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.445 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.704 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.704 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.704 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.704 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.704 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.704 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.704 { 00:12:01.704 "cntlid": 27, 00:12:01.704 "qid": 0, 00:12:01.704 "state": "enabled", 00:12:01.704 "thread": "nvmf_tgt_poll_group_000", 00:12:01.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:01.704 "listen_address": { 00:12:01.704 "trtype": "TCP", 00:12:01.704 "adrfam": "IPv4", 00:12:01.704 "traddr": "10.0.0.3", 00:12:01.704 "trsvcid": "4420" 00:12:01.704 }, 00:12:01.704 "peer_address": { 00:12:01.704 "trtype": "TCP", 00:12:01.704 "adrfam": "IPv4", 00:12:01.704 "traddr": "10.0.0.1", 00:12:01.704 "trsvcid": "38642" 00:12:01.704 }, 00:12:01.704 "auth": { 00:12:01.704 "state": "completed", 
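The records just above walk one full connect_authenticate iteration for sha256/ffdhe4096 with key1: the host stack is pinned to a single digest and DH group, the target is told to expect key1 (and controller key ckey1) from this host NQN, and a controller is attached so DH-HMAC-CHAP runs during CONNECT. A minimal sketch of that RPC sequence, assuming the sockets, NQNs, address and key names shown in the log (the key objects themselves are registered earlier in target/auth.sh and are not shown in this excerpt):

    # Sketch only: sockets, NQNs, addresses and key names are taken from the log;
    # the key objects key1/ckey1 are assumed to have been registered earlier.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a

    # Host side: restrict the NVMe driver to one digest and one DH group.
    "$rpc" -s "$host_sock" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Target side (default RPC socket assumed, as rpc_cmd uses in this run):
    # allow the host NQN and bind it to key1/ckey1.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; DH-HMAC-CHAP runs as part of CONNECT.
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
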
00:12:01.704 "digest": "sha256", 00:12:01.704 "dhgroup": "ffdhe4096" 00:12:01.704 } 00:12:01.704 } 00:12:01.704 ]' 00:12:01.704 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.963 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.963 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.963 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.963 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.963 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.963 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.963 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.221 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:02.221 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.157 10:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.157 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.726 00:12:03.726 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.726 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.726 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.985 { 00:12:03.985 "cntlid": 29, 00:12:03.985 "qid": 0, 00:12:03.985 "state": "enabled", 00:12:03.985 "thread": "nvmf_tgt_poll_group_000", 00:12:03.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:03.985 "listen_address": { 00:12:03.985 "trtype": "TCP", 00:12:03.985 "adrfam": "IPv4", 00:12:03.985 "traddr": "10.0.0.3", 00:12:03.985 "trsvcid": "4420" 00:12:03.985 }, 00:12:03.985 "peer_address": { 00:12:03.985 "trtype": "TCP", 00:12:03.985 "adrfam": 
"IPv4", 00:12:03.985 "traddr": "10.0.0.1", 00:12:03.985 "trsvcid": "38670" 00:12:03.985 }, 00:12:03.985 "auth": { 00:12:03.985 "state": "completed", 00:12:03.985 "digest": "sha256", 00:12:03.985 "dhgroup": "ffdhe4096" 00:12:03.985 } 00:12:03.985 } 00:12:03.985 ]' 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.985 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.554 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:04.554 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:05.120 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.120 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:05.120 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.120 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.120 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.120 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.120 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.120 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:05.386 10:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.386 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.953 00:12:05.953 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.953 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.953 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.211 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.211 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.211 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.211 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.211 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.211 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.211 { 00:12:06.211 "cntlid": 31, 00:12:06.211 "qid": 0, 00:12:06.211 "state": "enabled", 00:12:06.211 "thread": "nvmf_tgt_poll_group_000", 00:12:06.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:06.211 "listen_address": { 00:12:06.211 "trtype": "TCP", 00:12:06.211 "adrfam": "IPv4", 00:12:06.211 "traddr": "10.0.0.3", 00:12:06.211 "trsvcid": "4420" 00:12:06.211 }, 00:12:06.211 "peer_address": { 00:12:06.211 "trtype": "TCP", 
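The qpairs listing being captured here is what the script asserts on: each queue pair reports an auth object whose state, digest and dhgroup must match the values just negotiated. A short sketch of the same checks, mirroring the jq assertions at target/auth.sh@75-77 and assuming rpc.py talks to the target's default socket as rpc_cmd does in this run:

    # Sketch of the verification step for the sha256/ffdhe4096 iteration.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
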
00:12:06.211 "adrfam": "IPv4", 00:12:06.211 "traddr": "10.0.0.1", 00:12:06.211 "trsvcid": "38698" 00:12:06.211 }, 00:12:06.211 "auth": { 00:12:06.211 "state": "completed", 00:12:06.211 "digest": "sha256", 00:12:06.211 "dhgroup": "ffdhe4096" 00:12:06.211 } 00:12:06.211 } 00:12:06.211 ]' 00:12:06.211 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.211 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.211 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.469 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.469 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.469 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.469 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.469 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.728 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:06.728 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:07.294 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.294 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:07.294 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.294 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.294 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.294 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.294 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.294 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:07.295 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:07.862 
10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.862 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.121 00:12:08.121 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.121 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.121 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.688 { 00:12:08.688 "cntlid": 33, 00:12:08.688 "qid": 0, 00:12:08.688 "state": "enabled", 00:12:08.688 "thread": "nvmf_tgt_poll_group_000", 00:12:08.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:08.688 "listen_address": { 00:12:08.688 "trtype": "TCP", 00:12:08.688 "adrfam": "IPv4", 00:12:08.688 "traddr": 
"10.0.0.3", 00:12:08.688 "trsvcid": "4420" 00:12:08.688 }, 00:12:08.688 "peer_address": { 00:12:08.688 "trtype": "TCP", 00:12:08.688 "adrfam": "IPv4", 00:12:08.688 "traddr": "10.0.0.1", 00:12:08.688 "trsvcid": "58382" 00:12:08.688 }, 00:12:08.688 "auth": { 00:12:08.688 "state": "completed", 00:12:08.688 "digest": "sha256", 00:12:08.688 "dhgroup": "ffdhe6144" 00:12:08.688 } 00:12:08.688 } 00:12:08.688 ]' 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.688 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.947 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:08.947 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:09.883 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.883 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:09.883 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.883 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.883 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.883 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.883 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:09.883 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.142 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.401 00:12:10.401 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.401 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.401 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.969 { 00:12:10.969 "cntlid": 35, 00:12:10.969 "qid": 0, 00:12:10.969 "state": "enabled", 00:12:10.969 "thread": "nvmf_tgt_poll_group_000", 
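Beyond the RPC-level checks, each iteration is also exercised with the kernel initiator, as the surrounding records show: nvme-cli connects with the same secrets in DHHC-1 wire format, disconnects, and the host entry is removed before the next key is tried. A sketch of that leg, with placeholder strings standing in for the real DHHC-1 secrets printed in the log:

    # Sketch only: the DHHC-1 strings are placeholders, not the secrets from this run.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 \
        --dhchap-secret 'DHHC-1:01:<host-secret-base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller-secret-base64>:'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Remove the host entry so the next key/dhgroup combination starts clean.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 "$hostnqn"
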
00:12:10.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:10.969 "listen_address": { 00:12:10.969 "trtype": "TCP", 00:12:10.969 "adrfam": "IPv4", 00:12:10.969 "traddr": "10.0.0.3", 00:12:10.969 "trsvcid": "4420" 00:12:10.969 }, 00:12:10.969 "peer_address": { 00:12:10.969 "trtype": "TCP", 00:12:10.969 "adrfam": "IPv4", 00:12:10.969 "traddr": "10.0.0.1", 00:12:10.969 "trsvcid": "58410" 00:12:10.969 }, 00:12:10.969 "auth": { 00:12:10.969 "state": "completed", 00:12:10.969 "digest": "sha256", 00:12:10.969 "dhgroup": "ffdhe6144" 00:12:10.969 } 00:12:10.969 } 00:12:10.969 ]' 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.969 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.228 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:11.228 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:11.795 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.795 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:11.795 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.795 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.055 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.055 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.055 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:12.055 10:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.314 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.881 00:12:12.881 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.881 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.881 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.140 { 
00:12:13.140 "cntlid": 37, 00:12:13.140 "qid": 0, 00:12:13.140 "state": "enabled", 00:12:13.140 "thread": "nvmf_tgt_poll_group_000", 00:12:13.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:13.140 "listen_address": { 00:12:13.140 "trtype": "TCP", 00:12:13.140 "adrfam": "IPv4", 00:12:13.140 "traddr": "10.0.0.3", 00:12:13.140 "trsvcid": "4420" 00:12:13.140 }, 00:12:13.140 "peer_address": { 00:12:13.140 "trtype": "TCP", 00:12:13.140 "adrfam": "IPv4", 00:12:13.140 "traddr": "10.0.0.1", 00:12:13.140 "trsvcid": "58438" 00:12:13.140 }, 00:12:13.140 "auth": { 00:12:13.140 "state": "completed", 00:12:13.140 "digest": "sha256", 00:12:13.140 "dhgroup": "ffdhe6144" 00:12:13.140 } 00:12:13.140 } 00:12:13.140 ]' 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.140 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.398 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:13.398 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:14.333 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.333 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:14.333 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.333 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.333 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.333 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.333 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:14.333 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.333 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.900 00:12:14.900 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.900 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.900 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.159 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.159 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.159 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.159 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.159 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.159 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:12:15.159 { 00:12:15.159 "cntlid": 39, 00:12:15.159 "qid": 0, 00:12:15.159 "state": "enabled", 00:12:15.159 "thread": "nvmf_tgt_poll_group_000", 00:12:15.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:15.159 "listen_address": { 00:12:15.159 "trtype": "TCP", 00:12:15.159 "adrfam": "IPv4", 00:12:15.159 "traddr": "10.0.0.3", 00:12:15.159 "trsvcid": "4420" 00:12:15.159 }, 00:12:15.159 "peer_address": { 00:12:15.159 "trtype": "TCP", 00:12:15.159 "adrfam": "IPv4", 00:12:15.159 "traddr": "10.0.0.1", 00:12:15.159 "trsvcid": "58462" 00:12:15.159 }, 00:12:15.159 "auth": { 00:12:15.159 "state": "completed", 00:12:15.159 "digest": "sha256", 00:12:15.159 "dhgroup": "ffdhe6144" 00:12:15.159 } 00:12:15.159 } 00:12:15.159 ]' 00:12:15.159 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.159 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.159 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.419 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:15.419 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.419 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.419 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.419 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.677 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:15.677 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:16.244 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.503 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.071 00:12:17.330 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.330 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.330 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.588 { 00:12:17.588 "cntlid": 41, 00:12:17.588 "qid": 0, 00:12:17.588 "state": "enabled", 00:12:17.588 "thread": "nvmf_tgt_poll_group_000", 00:12:17.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:17.588 "listen_address": { 00:12:17.588 "trtype": "TCP", 00:12:17.588 "adrfam": "IPv4", 00:12:17.588 "traddr": "10.0.0.3", 00:12:17.588 "trsvcid": "4420" 00:12:17.588 }, 00:12:17.588 "peer_address": { 00:12:17.588 "trtype": "TCP", 00:12:17.588 "adrfam": "IPv4", 00:12:17.588 "traddr": "10.0.0.1", 00:12:17.588 "trsvcid": "58480" 00:12:17.588 }, 00:12:17.588 "auth": { 00:12:17.588 "state": "completed", 00:12:17.588 "digest": "sha256", 00:12:17.588 "dhgroup": "ffdhe8192" 00:12:17.588 } 00:12:17.588 } 00:12:17.588 ]' 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.588 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.847 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:17.847 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:18.782 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.782 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:18.782 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.782 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.782 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:18.782 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.782 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:18.782 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.041 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.609 00:12:19.609 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.609 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.609 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.868 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.868 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.868 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.868 10:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.868 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.868 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.868 { 00:12:19.868 "cntlid": 43, 00:12:19.868 "qid": 0, 00:12:19.868 "state": "enabled", 00:12:19.868 "thread": "nvmf_tgt_poll_group_000", 00:12:19.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:19.868 "listen_address": { 00:12:19.868 "trtype": "TCP", 00:12:19.868 "adrfam": "IPv4", 00:12:19.868 "traddr": "10.0.0.3", 00:12:19.868 "trsvcid": "4420" 00:12:19.868 }, 00:12:19.868 "peer_address": { 00:12:19.868 "trtype": "TCP", 00:12:19.868 "adrfam": "IPv4", 00:12:19.868 "traddr": "10.0.0.1", 00:12:19.868 "trsvcid": "51788" 00:12:19.868 }, 00:12:19.868 "auth": { 00:12:19.868 "state": "completed", 00:12:19.868 "digest": "sha256", 00:12:19.868 "dhgroup": "ffdhe8192" 00:12:19.868 } 00:12:19.868 } 00:12:19.868 ]' 00:12:19.868 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.127 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.127 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.127 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.127 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.127 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.127 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.127 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.386 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:20.386 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:21.323 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.323 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:21.323 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.323 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:21.323 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.323 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.323 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:21.323 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.582 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.150 00:12:22.150 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.150 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.150 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.408 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.408 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.408 10:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.408 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.408 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.408 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.408 { 00:12:22.408 "cntlid": 45, 00:12:22.408 "qid": 0, 00:12:22.408 "state": "enabled", 00:12:22.408 "thread": "nvmf_tgt_poll_group_000", 00:12:22.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:22.408 "listen_address": { 00:12:22.408 "trtype": "TCP", 00:12:22.408 "adrfam": "IPv4", 00:12:22.408 "traddr": "10.0.0.3", 00:12:22.408 "trsvcid": "4420" 00:12:22.408 }, 00:12:22.408 "peer_address": { 00:12:22.408 "trtype": "TCP", 00:12:22.408 "adrfam": "IPv4", 00:12:22.408 "traddr": "10.0.0.1", 00:12:22.408 "trsvcid": "51812" 00:12:22.408 }, 00:12:22.408 "auth": { 00:12:22.408 "state": "completed", 00:12:22.408 "digest": "sha256", 00:12:22.408 "dhgroup": "ffdhe8192" 00:12:22.408 } 00:12:22.408 } 00:12:22.408 ]' 00:12:22.667 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.667 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.667 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.667 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:22.667 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.667 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.667 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.668 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.927 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:22.927 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:23.864 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.864 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:23.864 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:23.864 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.864 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.864 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:23.864 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.124 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.692 00:12:24.692 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.692 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.692 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.956 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.956 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.956 
10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.956 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.956 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.956 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.956 { 00:12:24.956 "cntlid": 47, 00:12:24.956 "qid": 0, 00:12:24.956 "state": "enabled", 00:12:24.956 "thread": "nvmf_tgt_poll_group_000", 00:12:24.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:24.956 "listen_address": { 00:12:24.956 "trtype": "TCP", 00:12:24.956 "adrfam": "IPv4", 00:12:24.956 "traddr": "10.0.0.3", 00:12:24.956 "trsvcid": "4420" 00:12:24.956 }, 00:12:24.956 "peer_address": { 00:12:24.956 "trtype": "TCP", 00:12:24.956 "adrfam": "IPv4", 00:12:24.956 "traddr": "10.0.0.1", 00:12:24.956 "trsvcid": "51842" 00:12:24.956 }, 00:12:24.956 "auth": { 00:12:24.956 "state": "completed", 00:12:24.956 "digest": "sha256", 00:12:24.956 "dhgroup": "ffdhe8192" 00:12:24.956 } 00:12:24.956 } 00:12:24.956 ]' 00:12:24.956 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.215 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.215 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.215 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.215 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.215 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.215 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.215 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.473 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:25.473 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:26.037 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.295 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.553 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.553 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.553 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.553 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.811 00:12:26.811 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.811 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.811 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.069 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.069 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.069 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.069 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.069 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.069 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.069 { 00:12:27.069 "cntlid": 49, 00:12:27.069 "qid": 0, 00:12:27.069 "state": "enabled", 00:12:27.069 "thread": "nvmf_tgt_poll_group_000", 00:12:27.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:27.069 "listen_address": { 00:12:27.069 "trtype": "TCP", 00:12:27.069 "adrfam": "IPv4", 00:12:27.069 "traddr": "10.0.0.3", 00:12:27.069 "trsvcid": "4420" 00:12:27.069 }, 00:12:27.069 "peer_address": { 00:12:27.069 "trtype": "TCP", 00:12:27.069 "adrfam": "IPv4", 00:12:27.069 "traddr": "10.0.0.1", 00:12:27.069 "trsvcid": "51878" 00:12:27.069 }, 00:12:27.069 "auth": { 00:12:27.069 "state": "completed", 00:12:27.069 "digest": "sha384", 00:12:27.069 "dhgroup": "null" 00:12:27.069 } 00:12:27.069 } 00:12:27.069 ]' 00:12:27.069 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.069 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.070 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.328 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:27.328 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.328 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.328 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.328 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.586 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:27.586 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:28.152 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.152 10:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:28.152 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.152 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.152 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.152 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.152 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:28.152 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.483 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.050 00:12:29.050 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.050 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.050 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.309 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.309 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.309 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.309 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.309 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.309 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.309 { 00:12:29.309 "cntlid": 51, 00:12:29.309 "qid": 0, 00:12:29.309 "state": "enabled", 00:12:29.309 "thread": "nvmf_tgt_poll_group_000", 00:12:29.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:29.309 "listen_address": { 00:12:29.309 "trtype": "TCP", 00:12:29.309 "adrfam": "IPv4", 00:12:29.309 "traddr": "10.0.0.3", 00:12:29.309 "trsvcid": "4420" 00:12:29.309 }, 00:12:29.309 "peer_address": { 00:12:29.309 "trtype": "TCP", 00:12:29.309 "adrfam": "IPv4", 00:12:29.309 "traddr": "10.0.0.1", 00:12:29.309 "trsvcid": "51902" 00:12:29.309 }, 00:12:29.309 "auth": { 00:12:29.309 "state": "completed", 00:12:29.309 "digest": "sha384", 00:12:29.309 "dhgroup": "null" 00:12:29.309 } 00:12:29.309 } 00:12:29.310 ]' 00:12:29.310 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.310 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.310 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.310 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:29.310 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.310 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.310 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.310 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.569 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:29.569 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:30.505 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.505 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.505 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:30.505 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.505 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.505 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.505 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.505 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.505 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.765 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.024 00:12:31.024 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.024 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:12:31.024 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.283 { 00:12:31.283 "cntlid": 53, 00:12:31.283 "qid": 0, 00:12:31.283 "state": "enabled", 00:12:31.283 "thread": "nvmf_tgt_poll_group_000", 00:12:31.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:31.283 "listen_address": { 00:12:31.283 "trtype": "TCP", 00:12:31.283 "adrfam": "IPv4", 00:12:31.283 "traddr": "10.0.0.3", 00:12:31.283 "trsvcid": "4420" 00:12:31.283 }, 00:12:31.283 "peer_address": { 00:12:31.283 "trtype": "TCP", 00:12:31.283 "adrfam": "IPv4", 00:12:31.283 "traddr": "10.0.0.1", 00:12:31.283 "trsvcid": "51920" 00:12:31.283 }, 00:12:31.283 "auth": { 00:12:31.283 "state": "completed", 00:12:31.283 "digest": "sha384", 00:12:31.283 "dhgroup": "null" 00:12:31.283 } 00:12:31.283 } 00:12:31.283 ]' 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:31.283 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.543 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.543 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.543 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.803 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:31.803 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:32.370 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.370 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:32.370 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.370 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.370 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.370 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.370 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:32.370 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.629 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.630 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.630 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.630 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.198 00:12:33.198 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.198 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:12:33.198 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.456 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.457 { 00:12:33.457 "cntlid": 55, 00:12:33.457 "qid": 0, 00:12:33.457 "state": "enabled", 00:12:33.457 "thread": "nvmf_tgt_poll_group_000", 00:12:33.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:33.457 "listen_address": { 00:12:33.457 "trtype": "TCP", 00:12:33.457 "adrfam": "IPv4", 00:12:33.457 "traddr": "10.0.0.3", 00:12:33.457 "trsvcid": "4420" 00:12:33.457 }, 00:12:33.457 "peer_address": { 00:12:33.457 "trtype": "TCP", 00:12:33.457 "adrfam": "IPv4", 00:12:33.457 "traddr": "10.0.0.1", 00:12:33.457 "trsvcid": "51942" 00:12:33.457 }, 00:12:33.457 "auth": { 00:12:33.457 "state": "completed", 00:12:33.457 "digest": "sha384", 00:12:33.457 "dhgroup": "null" 00:12:33.457 } 00:12:33.457 } 00:12:33.457 ]' 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:33.457 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.715 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.715 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.715 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.081 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:34.082 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:34.649 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.908 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.476 00:12:35.476 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.476 10:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.476 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.738 { 00:12:35.738 "cntlid": 57, 00:12:35.738 "qid": 0, 00:12:35.738 "state": "enabled", 00:12:35.738 "thread": "nvmf_tgt_poll_group_000", 00:12:35.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:35.738 "listen_address": { 00:12:35.738 "trtype": "TCP", 00:12:35.738 "adrfam": "IPv4", 00:12:35.738 "traddr": "10.0.0.3", 00:12:35.738 "trsvcid": "4420" 00:12:35.738 }, 00:12:35.738 "peer_address": { 00:12:35.738 "trtype": "TCP", 00:12:35.738 "adrfam": "IPv4", 00:12:35.738 "traddr": "10.0.0.1", 00:12:35.738 "trsvcid": "51980" 00:12:35.738 }, 00:12:35.738 "auth": { 00:12:35.738 "state": "completed", 00:12:35.738 "digest": "sha384", 00:12:35.738 "dhgroup": "ffdhe2048" 00:12:35.738 } 00:12:35.738 } 00:12:35.738 ]' 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.738 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.997 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:35.997 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.997 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.997 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.997 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.256 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:36.256 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: 
--dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:37.191 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.191 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:37.191 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.191 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.191 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.191 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.191 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:37.191 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.450 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.708 00:12:37.708 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.708 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.708 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.967 { 00:12:37.967 "cntlid": 59, 00:12:37.967 "qid": 0, 00:12:37.967 "state": "enabled", 00:12:37.967 "thread": "nvmf_tgt_poll_group_000", 00:12:37.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:37.967 "listen_address": { 00:12:37.967 "trtype": "TCP", 00:12:37.967 "adrfam": "IPv4", 00:12:37.967 "traddr": "10.0.0.3", 00:12:37.967 "trsvcid": "4420" 00:12:37.967 }, 00:12:37.967 "peer_address": { 00:12:37.967 "trtype": "TCP", 00:12:37.967 "adrfam": "IPv4", 00:12:37.967 "traddr": "10.0.0.1", 00:12:37.967 "trsvcid": "52008" 00:12:37.967 }, 00:12:37.967 "auth": { 00:12:37.967 "state": "completed", 00:12:37.967 "digest": "sha384", 00:12:37.967 "dhgroup": "ffdhe2048" 00:12:37.967 } 00:12:37.967 } 00:12:37.967 ]' 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.967 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.225 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.225 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.225 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.225 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.225 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.484 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:38.484 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:39.423 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.423 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:39.423 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.423 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.423 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.423 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.423 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.424 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.424 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.685 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.685 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.685 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.685 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.944 00:12:39.944 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.944 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.944 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.202 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.202 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.202 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.202 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.202 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.202 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.202 { 00:12:40.202 "cntlid": 61, 00:12:40.202 "qid": 0, 00:12:40.202 "state": "enabled", 00:12:40.202 "thread": "nvmf_tgt_poll_group_000", 00:12:40.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:40.202 "listen_address": { 00:12:40.202 "trtype": "TCP", 00:12:40.202 "adrfam": "IPv4", 00:12:40.202 "traddr": "10.0.0.3", 00:12:40.202 "trsvcid": "4420" 00:12:40.202 }, 00:12:40.202 "peer_address": { 00:12:40.202 "trtype": "TCP", 00:12:40.202 "adrfam": "IPv4", 00:12:40.202 "traddr": "10.0.0.1", 00:12:40.202 "trsvcid": "43452" 00:12:40.202 }, 00:12:40.202 "auth": { 00:12:40.202 "state": "completed", 00:12:40.202 "digest": "sha384", 00:12:40.203 "dhgroup": "ffdhe2048" 00:12:40.203 } 00:12:40.203 } 00:12:40.203 ]' 00:12:40.203 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.203 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.203 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.461 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.461 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.461 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.461 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.461 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.719 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:40.719 10:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.655 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.914 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.914 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:41.914 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.914 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.173 00:12:42.173 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.173 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.173 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.765 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.765 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.765 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.765 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.765 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.765 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.765 { 00:12:42.765 "cntlid": 63, 00:12:42.765 "qid": 0, 00:12:42.765 "state": "enabled", 00:12:42.765 "thread": "nvmf_tgt_poll_group_000", 00:12:42.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:42.765 "listen_address": { 00:12:42.765 "trtype": "TCP", 00:12:42.765 "adrfam": "IPv4", 00:12:42.765 "traddr": "10.0.0.3", 00:12:42.765 "trsvcid": "4420" 00:12:42.765 }, 00:12:42.765 "peer_address": { 00:12:42.765 "trtype": "TCP", 00:12:42.765 "adrfam": "IPv4", 00:12:42.765 "traddr": "10.0.0.1", 00:12:42.765 "trsvcid": "43494" 00:12:42.765 }, 00:12:42.765 "auth": { 00:12:42.765 "state": "completed", 00:12:42.765 "digest": "sha384", 00:12:42.765 "dhgroup": "ffdhe2048" 00:12:42.765 } 00:12:42.765 } 00:12:42.765 ]' 00:12:42.766 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.766 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.766 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.766 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.766 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.766 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.766 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.766 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.024 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:43.024 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:43.592 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.851 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.852 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:43.852 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.419 00:12:44.419 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.419 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.419 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.679 { 00:12:44.679 "cntlid": 65, 00:12:44.679 "qid": 0, 00:12:44.679 "state": "enabled", 00:12:44.679 "thread": "nvmf_tgt_poll_group_000", 00:12:44.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:44.679 "listen_address": { 00:12:44.679 "trtype": "TCP", 00:12:44.679 "adrfam": "IPv4", 00:12:44.679 "traddr": "10.0.0.3", 00:12:44.679 "trsvcid": "4420" 00:12:44.679 }, 00:12:44.679 "peer_address": { 00:12:44.679 "trtype": "TCP", 00:12:44.679 "adrfam": "IPv4", 00:12:44.679 "traddr": "10.0.0.1", 00:12:44.679 "trsvcid": "43526" 00:12:44.679 }, 00:12:44.679 "auth": { 00:12:44.679 "state": "completed", 00:12:44.679 "digest": "sha384", 00:12:44.679 "dhgroup": "ffdhe3072" 00:12:44.679 } 00:12:44.679 } 00:12:44.679 ]' 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.679 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.247 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:45.247 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:45.813 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.813 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:45.813 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.813 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.813 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.813 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.813 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:45.813 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.072 10:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.072 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.332 00:12:46.332 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.332 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.332 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.963 { 00:12:46.963 "cntlid": 67, 00:12:46.963 "qid": 0, 00:12:46.963 "state": "enabled", 00:12:46.963 "thread": "nvmf_tgt_poll_group_000", 00:12:46.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:46.963 "listen_address": { 00:12:46.963 "trtype": "TCP", 00:12:46.963 "adrfam": "IPv4", 00:12:46.963 "traddr": "10.0.0.3", 00:12:46.963 "trsvcid": "4420" 00:12:46.963 }, 00:12:46.963 "peer_address": { 00:12:46.963 "trtype": "TCP", 00:12:46.963 "adrfam": "IPv4", 00:12:46.963 "traddr": "10.0.0.1", 00:12:46.963 "trsvcid": "43550" 00:12:46.963 }, 00:12:46.963 "auth": { 00:12:46.963 "state": "completed", 00:12:46.963 "digest": "sha384", 00:12:46.963 "dhgroup": "ffdhe3072" 00:12:46.963 } 00:12:46.963 } 00:12:46.963 ]' 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.963 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.223 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:47.223 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:48.161 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.161 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:48.161 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.161 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.161 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.161 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.161 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:48.161 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.161 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.729 00:12:48.729 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.729 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.729 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.988 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.988 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.988 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.988 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.988 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.989 { 00:12:48.989 "cntlid": 69, 00:12:48.989 "qid": 0, 00:12:48.989 "state": "enabled", 00:12:48.989 "thread": "nvmf_tgt_poll_group_000", 00:12:48.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:48.989 "listen_address": { 00:12:48.989 "trtype": "TCP", 00:12:48.989 "adrfam": "IPv4", 00:12:48.989 "traddr": "10.0.0.3", 00:12:48.989 "trsvcid": "4420" 00:12:48.989 }, 00:12:48.989 "peer_address": { 00:12:48.989 "trtype": "TCP", 00:12:48.989 "adrfam": "IPv4", 00:12:48.989 "traddr": "10.0.0.1", 00:12:48.989 "trsvcid": "37984" 00:12:48.989 }, 00:12:48.989 "auth": { 00:12:48.989 "state": "completed", 00:12:48.989 "digest": "sha384", 00:12:48.989 "dhgroup": "ffdhe3072" 00:12:48.989 } 00:12:48.989 } 00:12:48.989 ]' 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:48.989 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.248 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:49.248 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:50.187 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.187 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:50.187 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.187 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.187 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.187 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.187 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.187 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.447 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.708 00:12:50.708 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.708 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.708 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.967 { 00:12:50.967 "cntlid": 71, 00:12:50.967 "qid": 0, 00:12:50.967 "state": "enabled", 00:12:50.967 "thread": "nvmf_tgt_poll_group_000", 00:12:50.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:50.967 "listen_address": { 00:12:50.967 "trtype": "TCP", 00:12:50.967 "adrfam": "IPv4", 00:12:50.967 "traddr": "10.0.0.3", 00:12:50.967 "trsvcid": "4420" 00:12:50.967 }, 00:12:50.967 "peer_address": { 00:12:50.967 "trtype": "TCP", 00:12:50.967 "adrfam": "IPv4", 00:12:50.967 "traddr": "10.0.0.1", 00:12:50.967 "trsvcid": "38022" 00:12:50.967 }, 00:12:50.967 "auth": { 00:12:50.967 "state": "completed", 00:12:50.967 "digest": "sha384", 00:12:50.967 "dhgroup": "ffdhe3072" 00:12:50.967 } 00:12:50.967 } 00:12:50.967 ]' 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:50.967 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.226 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.226 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.226 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.485 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:51.485 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:52.053 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.312 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.570 10:07:06 
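Each connect_authenticate pass in this trace provisions DH-HMAC-CHAP on both ends before attaching a bdev controller. A condensed sketch of the RPC sequence for the sha384/ffdhe4096/key0 pass that begins just above, assuming the named key files were loaded earlier in the test (paths, NQNs, addresses and key names are the ones this run uses):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    host_sock="/var/tmp/host.sock"
    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a"

    # Host (initiator) side: restrict the allowed digest and DH group for this pass.
    "$rpc" -s "$host_sock" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Target side: allow this host NQN on the subsystem with key0, plus ckey0
    # so the controller authenticates back (bidirectional auth).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller over TCP, authenticating with the matching key pair.
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0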
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.570 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.570 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.570 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.829 00:12:52.829 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.829 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.829 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.088 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.088 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.088 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.088 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.346 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.346 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.346 { 00:12:53.346 "cntlid": 73, 00:12:53.346 "qid": 0, 00:12:53.346 "state": "enabled", 00:12:53.346 "thread": "nvmf_tgt_poll_group_000", 00:12:53.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:53.346 "listen_address": { 00:12:53.346 "trtype": "TCP", 00:12:53.346 "adrfam": "IPv4", 00:12:53.346 "traddr": "10.0.0.3", 00:12:53.346 "trsvcid": "4420" 00:12:53.346 }, 00:12:53.346 "peer_address": { 00:12:53.346 "trtype": "TCP", 00:12:53.346 "adrfam": "IPv4", 00:12:53.347 "traddr": "10.0.0.1", 00:12:53.347 "trsvcid": "38048" 00:12:53.347 }, 00:12:53.347 "auth": { 00:12:53.347 "state": "completed", 00:12:53.347 "digest": "sha384", 00:12:53.347 "dhgroup": "ffdhe4096" 00:12:53.347 } 00:12:53.347 } 00:12:53.347 ]' 00:12:53.347 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.347 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.347 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.347 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:53.347 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.347 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.347 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.347 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.605 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:53.605 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:12:54.539 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.539 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:54.539 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.539 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.539 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.539 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.539 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:54.539 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.797 10:07:08 
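After the bdev-level attach is verified, the trace detaches the controller and re-runs the same key through the kernel initiator, handing nvme-cli the raw DHHC-1 secrets, as in the nvme connect/disconnect above. A trimmed sketch of that step; the placeholder variables stand in for the full secrets printed in the trace:

    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostid="6147973c-080a-4377-b1e7-85172bdc559a"
    key="DHHC-1:00:..."       # host secret; full value appears in the log above
    ctrl_key="DHHC-1:03:..."  # controller secret; full value appears in the log above

    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"

    nvme disconnect -n "$subnqn"

On success the disconnect reports "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)", as seen after each connect in this log.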
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.797 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.056 00:12:55.056 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.056 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.056 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.623 { 00:12:55.623 "cntlid": 75, 00:12:55.623 "qid": 0, 00:12:55.623 "state": "enabled", 00:12:55.623 "thread": "nvmf_tgt_poll_group_000", 00:12:55.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:55.623 "listen_address": { 00:12:55.623 "trtype": "TCP", 00:12:55.623 "adrfam": "IPv4", 00:12:55.623 "traddr": "10.0.0.3", 00:12:55.623 "trsvcid": "4420" 00:12:55.623 }, 00:12:55.623 "peer_address": { 00:12:55.623 "trtype": "TCP", 00:12:55.623 "adrfam": "IPv4", 00:12:55.623 "traddr": "10.0.0.1", 00:12:55.623 "trsvcid": "38078" 00:12:55.623 }, 00:12:55.623 "auth": { 00:12:55.623 "state": "completed", 00:12:55.623 "digest": "sha384", 00:12:55.623 "dhgroup": "ffdhe4096" 00:12:55.623 } 00:12:55.623 } 00:12:55.623 ]' 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.623 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.191 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:56.191 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:12:56.757 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.757 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:56.757 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.758 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.758 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.758 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.758 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:56.758 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.015 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.582 00:12:57.582 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.582 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.582 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.842 { 00:12:57.842 "cntlid": 77, 00:12:57.842 "qid": 0, 00:12:57.842 "state": "enabled", 00:12:57.842 "thread": "nvmf_tgt_poll_group_000", 00:12:57.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:12:57.842 "listen_address": { 00:12:57.842 "trtype": "TCP", 00:12:57.842 "adrfam": "IPv4", 00:12:57.842 "traddr": "10.0.0.3", 00:12:57.842 "trsvcid": "4420" 00:12:57.842 }, 00:12:57.842 "peer_address": { 00:12:57.842 "trtype": "TCP", 00:12:57.842 "adrfam": "IPv4", 00:12:57.842 "traddr": "10.0.0.1", 00:12:57.842 "trsvcid": "38098" 00:12:57.842 }, 00:12:57.842 "auth": { 00:12:57.842 "state": "completed", 00:12:57.842 "digest": "sha384", 00:12:57.842 "dhgroup": "ffdhe4096" 00:12:57.842 } 00:12:57.842 } 00:12:57.842 ]' 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:57.842 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.101 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.101 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.101 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.361 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:58.361 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:12:59.298 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.298 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:12:59.298 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.298 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.298 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.298 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.298 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.298 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.557 10:07:13 
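The target/auth.sh@119-@121 loop lines seen throughout this trace show the outer iteration: for each DH group and each key index the host options are reset and connect_authenticate is invoked, while the @68 line makes the controller key optional. A simplified sketch of that structure, assuming keys/ckeys arrays populated elsewhere in the script (the digest is presumably iterated at an outer level; only sha384 appears in this excerpt, and hostrpc is the script's wrapper for rpc.py -s /var/tmp/host.sock):

    for dhgroup in "${dhgroups[@]}"; do   # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 in this excerpt
      for keyid in "${!keys[@]}"; do      # key indexes 0..3
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
      done
    done

    # Inside connect_authenticate ($3 is the key index): expand to
    # '--dhchap-ctrlr-key ckeyN' only when a controller key exists for index N,
    # otherwise to nothing, so unidirectional keys take the same code path.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})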
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.557 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.816 00:12:59.816 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.816 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.816 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.075 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.075 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.075 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.075 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.334 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.334 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.334 { 00:13:00.334 "cntlid": 79, 00:13:00.334 "qid": 0, 00:13:00.334 "state": "enabled", 00:13:00.334 "thread": "nvmf_tgt_poll_group_000", 00:13:00.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:00.334 "listen_address": { 00:13:00.334 "trtype": "TCP", 00:13:00.334 "adrfam": "IPv4", 00:13:00.334 "traddr": "10.0.0.3", 00:13:00.334 "trsvcid": "4420" 00:13:00.334 }, 00:13:00.334 "peer_address": { 00:13:00.334 "trtype": "TCP", 00:13:00.334 "adrfam": "IPv4", 00:13:00.334 "traddr": "10.0.0.1", 00:13:00.334 "trsvcid": "34708" 00:13:00.334 }, 00:13:00.334 "auth": { 00:13:00.334 "state": "completed", 00:13:00.334 "digest": "sha384", 00:13:00.334 "dhgroup": "ffdhe4096" 00:13:00.334 } 00:13:00.334 } 00:13:00.334 ]' 00:13:00.334 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.334 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.334 10:07:14 
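Each pass then confirms that the connection really negotiated the expected parameters: the attached controller must be listed by name, and the subsystem's queue pair must report the digest, the DH group, and a completed auth state. A minimal sketch of that check using the same RPC calls and jq filters as the trace (expected values shown are those of the ffdhe4096 passes in this excerpt):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    # Host side: the attached bdev controller should be present.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # Target side: inspect the qpair's negotiated auth parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]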
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.334 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.334 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.334 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.334 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.334 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.593 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:00.593 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:01.161 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.161 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:01.161 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.161 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.161 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.161 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.161 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:01.161 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.729 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.988 00:13:01.988 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.988 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.988 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.246 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.246 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.246 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.246 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.506 { 00:13:02.506 "cntlid": 81, 00:13:02.506 "qid": 0, 00:13:02.506 "state": "enabled", 00:13:02.506 "thread": "nvmf_tgt_poll_group_000", 00:13:02.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:02.506 "listen_address": { 00:13:02.506 "trtype": "TCP", 00:13:02.506 "adrfam": "IPv4", 00:13:02.506 "traddr": "10.0.0.3", 00:13:02.506 "trsvcid": "4420" 00:13:02.506 }, 00:13:02.506 "peer_address": { 00:13:02.506 "trtype": "TCP", 00:13:02.506 "adrfam": "IPv4", 00:13:02.506 "traddr": "10.0.0.1", 00:13:02.506 "trsvcid": "34738" 00:13:02.506 }, 00:13:02.506 "auth": { 00:13:02.506 "state": "completed", 00:13:02.506 "digest": "sha384", 00:13:02.506 "dhgroup": "ffdhe6144" 00:13:02.506 } 00:13:02.506 } 00:13:02.506 ]' 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.506 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.074 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:03.074 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:03.642 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.642 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:03.642 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.642 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.642 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.642 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.642 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:03.642 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.966 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.238 00:13:04.497 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.497 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.497 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.755 { 00:13:04.755 "cntlid": 83, 00:13:04.755 "qid": 0, 00:13:04.755 "state": "enabled", 00:13:04.755 "thread": "nvmf_tgt_poll_group_000", 00:13:04.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:04.755 "listen_address": { 00:13:04.755 "trtype": "TCP", 00:13:04.755 "adrfam": "IPv4", 00:13:04.755 "traddr": "10.0.0.3", 00:13:04.755 "trsvcid": "4420" 00:13:04.755 }, 00:13:04.755 "peer_address": { 00:13:04.755 "trtype": "TCP", 00:13:04.755 "adrfam": "IPv4", 00:13:04.755 "traddr": "10.0.0.1", 00:13:04.755 "trsvcid": "34766" 00:13:04.755 }, 00:13:04.755 "auth": { 00:13:04.755 "state": "completed", 00:13:04.755 "digest": "sha384", 
00:13:04.755 "dhgroup": "ffdhe6144" 00:13:04.755 } 00:13:04.755 } 00:13:04.755 ]' 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.755 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.014 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:05.014 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:05.949 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.949 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:05.949 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.949 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.949 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.949 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.949 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.949 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.208 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.466 00:13:06.466 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.466 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.466 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.034 { 00:13:07.034 "cntlid": 85, 00:13:07.034 "qid": 0, 00:13:07.034 "state": "enabled", 00:13:07.034 "thread": "nvmf_tgt_poll_group_000", 00:13:07.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:07.034 "listen_address": { 00:13:07.034 "trtype": "TCP", 00:13:07.034 "adrfam": "IPv4", 00:13:07.034 "traddr": "10.0.0.3", 00:13:07.034 "trsvcid": "4420" 00:13:07.034 }, 00:13:07.034 "peer_address": { 00:13:07.034 "trtype": "TCP", 00:13:07.034 "adrfam": "IPv4", 00:13:07.034 "traddr": "10.0.0.1", 00:13:07.034 "trsvcid": "34788" 
00:13:07.034 }, 00:13:07.034 "auth": { 00:13:07.034 "state": "completed", 00:13:07.034 "digest": "sha384", 00:13:07.034 "dhgroup": "ffdhe6144" 00:13:07.034 } 00:13:07.034 } 00:13:07.034 ]' 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.034 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.035 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.035 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.294 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:07.294 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:07.862 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.862 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:07.862 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.862 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.862 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.862 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.862 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:07.862 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.120 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.687 00:13:08.687 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.687 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.687 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.946 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.946 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.946 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.946 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.946 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.946 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.946 { 00:13:08.946 "cntlid": 87, 00:13:08.946 "qid": 0, 00:13:08.946 "state": "enabled", 00:13:08.946 "thread": "nvmf_tgt_poll_group_000", 00:13:08.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:08.946 "listen_address": { 00:13:08.946 "trtype": "TCP", 00:13:08.946 "adrfam": "IPv4", 00:13:08.946 "traddr": "10.0.0.3", 00:13:08.946 "trsvcid": "4420" 00:13:08.946 }, 00:13:08.946 "peer_address": { 00:13:08.946 "trtype": "TCP", 00:13:08.946 "adrfam": "IPv4", 00:13:08.946 "traddr": "10.0.0.1", 00:13:08.946 "trsvcid": 
"48786" 00:13:08.946 }, 00:13:08.946 "auth": { 00:13:08.946 "state": "completed", 00:13:08.946 "digest": "sha384", 00:13:08.946 "dhgroup": "ffdhe6144" 00:13:08.946 } 00:13:08.946 } 00:13:08.946 ]' 00:13:08.946 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.205 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.205 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.205 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:09.205 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.205 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.205 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.205 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.465 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:09.465 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:10.032 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.290 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.226 00:13:11.226 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.226 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.226 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.484 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.484 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.484 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.485 { 00:13:11.485 "cntlid": 89, 00:13:11.485 "qid": 0, 00:13:11.485 "state": "enabled", 00:13:11.485 "thread": "nvmf_tgt_poll_group_000", 00:13:11.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:11.485 "listen_address": { 00:13:11.485 "trtype": "TCP", 00:13:11.485 "adrfam": "IPv4", 00:13:11.485 "traddr": "10.0.0.3", 00:13:11.485 "trsvcid": "4420" 00:13:11.485 }, 00:13:11.485 "peer_address": { 00:13:11.485 
"trtype": "TCP", 00:13:11.485 "adrfam": "IPv4", 00:13:11.485 "traddr": "10.0.0.1", 00:13:11.485 "trsvcid": "48804" 00:13:11.485 }, 00:13:11.485 "auth": { 00:13:11.485 "state": "completed", 00:13:11.485 "digest": "sha384", 00:13:11.485 "dhgroup": "ffdhe8192" 00:13:11.485 } 00:13:11.485 } 00:13:11.485 ]' 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.485 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.744 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:11.744 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:12.678 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.679 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:12.679 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.679 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.679 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.679 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.679 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:12.679 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:12.937 10:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.937 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.503 00:13:13.503 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.503 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.503 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.763 { 00:13:13.763 "cntlid": 91, 00:13:13.763 "qid": 0, 00:13:13.763 "state": "enabled", 00:13:13.763 "thread": "nvmf_tgt_poll_group_000", 00:13:13.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 
00:13:13.763 "listen_address": { 00:13:13.763 "trtype": "TCP", 00:13:13.763 "adrfam": "IPv4", 00:13:13.763 "traddr": "10.0.0.3", 00:13:13.763 "trsvcid": "4420" 00:13:13.763 }, 00:13:13.763 "peer_address": { 00:13:13.763 "trtype": "TCP", 00:13:13.763 "adrfam": "IPv4", 00:13:13.763 "traddr": "10.0.0.1", 00:13:13.763 "trsvcid": "48822" 00:13:13.763 }, 00:13:13.763 "auth": { 00:13:13.763 "state": "completed", 00:13:13.763 "digest": "sha384", 00:13:13.763 "dhgroup": "ffdhe8192" 00:13:13.763 } 00:13:13.763 } 00:13:13.763 ]' 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.763 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.027 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.027 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.027 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.027 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.285 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:14.285 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:14.850 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.850 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:14.850 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.850 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.108 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.366 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.366 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.366 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.366 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.930 00:13:15.930 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.930 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.930 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.187 { 00:13:16.187 "cntlid": 93, 00:13:16.187 "qid": 0, 00:13:16.187 "state": "enabled", 00:13:16.187 "thread": 
"nvmf_tgt_poll_group_000", 00:13:16.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:16.187 "listen_address": { 00:13:16.187 "trtype": "TCP", 00:13:16.187 "adrfam": "IPv4", 00:13:16.187 "traddr": "10.0.0.3", 00:13:16.187 "trsvcid": "4420" 00:13:16.187 }, 00:13:16.187 "peer_address": { 00:13:16.187 "trtype": "TCP", 00:13:16.187 "adrfam": "IPv4", 00:13:16.187 "traddr": "10.0.0.1", 00:13:16.187 "trsvcid": "48858" 00:13:16.187 }, 00:13:16.187 "auth": { 00:13:16.187 "state": "completed", 00:13:16.187 "digest": "sha384", 00:13:16.187 "dhgroup": "ffdhe8192" 00:13:16.187 } 00:13:16.187 } 00:13:16.187 ]' 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:16.187 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.187 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:16.187 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.445 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.445 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.445 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.704 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:16.704 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:17.271 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.271 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:17.271 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.271 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.271 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.271 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.271 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.271 10:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.179 00:13:18.179 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.179 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.179 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.749 { 00:13:18.749 "cntlid": 95, 00:13:18.749 "qid": 0, 00:13:18.749 "state": "enabled", 00:13:18.749 
"thread": "nvmf_tgt_poll_group_000", 00:13:18.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:18.749 "listen_address": { 00:13:18.749 "trtype": "TCP", 00:13:18.749 "adrfam": "IPv4", 00:13:18.749 "traddr": "10.0.0.3", 00:13:18.749 "trsvcid": "4420" 00:13:18.749 }, 00:13:18.749 "peer_address": { 00:13:18.749 "trtype": "TCP", 00:13:18.749 "adrfam": "IPv4", 00:13:18.749 "traddr": "10.0.0.1", 00:13:18.749 "trsvcid": "40024" 00:13:18.749 }, 00:13:18.749 "auth": { 00:13:18.749 "state": "completed", 00:13:18.749 "digest": "sha384", 00:13:18.749 "dhgroup": "ffdhe8192" 00:13:18.749 } 00:13:18.749 } 00:13:18.749 ]' 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.749 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.008 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:19.008 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:19.576 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.576 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:19.576 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.576 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.835 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.835 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:19.836 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.836 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.836 10:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:19.836 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:20.094 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.095 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.353 00:13:20.353 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.353 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.353 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.612 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.612 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.612 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.612 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.612 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.612 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.612 { 00:13:20.612 "cntlid": 97, 00:13:20.612 "qid": 0, 00:13:20.612 "state": "enabled", 00:13:20.613 "thread": "nvmf_tgt_poll_group_000", 00:13:20.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:20.613 "listen_address": { 00:13:20.613 "trtype": "TCP", 00:13:20.613 "adrfam": "IPv4", 00:13:20.613 "traddr": "10.0.0.3", 00:13:20.613 "trsvcid": "4420" 00:13:20.613 }, 00:13:20.613 "peer_address": { 00:13:20.613 "trtype": "TCP", 00:13:20.613 "adrfam": "IPv4", 00:13:20.613 "traddr": "10.0.0.1", 00:13:20.613 "trsvcid": "40058" 00:13:20.613 }, 00:13:20.613 "auth": { 00:13:20.613 "state": "completed", 00:13:20.613 "digest": "sha512", 00:13:20.613 "dhgroup": "null" 00:13:20.613 } 00:13:20.613 } 00:13:20.613 ]' 00:13:20.613 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.613 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.613 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.872 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:20.872 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.872 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.872 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.872 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.132 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:21.132 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:21.699 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.699 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:21.699 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.699 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.699 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:21.699 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.699 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:21.699 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.039 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.298 00:13:22.298 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.298 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.298 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.556 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.556 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.556 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.556 10:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.816 { 00:13:22.816 "cntlid": 99, 00:13:22.816 "qid": 0, 00:13:22.816 "state": "enabled", 00:13:22.816 "thread": "nvmf_tgt_poll_group_000", 00:13:22.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:22.816 "listen_address": { 00:13:22.816 "trtype": "TCP", 00:13:22.816 "adrfam": "IPv4", 00:13:22.816 "traddr": "10.0.0.3", 00:13:22.816 "trsvcid": "4420" 00:13:22.816 }, 00:13:22.816 "peer_address": { 00:13:22.816 "trtype": "TCP", 00:13:22.816 "adrfam": "IPv4", 00:13:22.816 "traddr": "10.0.0.1", 00:13:22.816 "trsvcid": "40084" 00:13:22.816 }, 00:13:22.816 "auth": { 00:13:22.816 "state": "completed", 00:13:22.816 "digest": "sha512", 00:13:22.816 "dhgroup": "null" 00:13:22.816 } 00:13:22.816 } 00:13:22.816 ]' 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.816 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.074 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:23.074 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.009 10:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.009 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.578 00:13:24.578 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.578 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.578 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.835 { 00:13:24.835 "cntlid": 101, 00:13:24.835 "qid": 0, 00:13:24.835 "state": "enabled", 00:13:24.835 "thread": "nvmf_tgt_poll_group_000", 00:13:24.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:24.835 "listen_address": { 00:13:24.835 "trtype": "TCP", 00:13:24.835 "adrfam": "IPv4", 00:13:24.835 "traddr": "10.0.0.3", 00:13:24.835 "trsvcid": "4420" 00:13:24.835 }, 00:13:24.835 "peer_address": { 00:13:24.835 "trtype": "TCP", 00:13:24.835 "adrfam": "IPv4", 00:13:24.835 "traddr": "10.0.0.1", 00:13:24.835 "trsvcid": "40116" 00:13:24.835 }, 00:13:24.835 "auth": { 00:13:24.835 "state": "completed", 00:13:24.835 "digest": "sha512", 00:13:24.835 "dhgroup": "null" 00:13:24.835 } 00:13:24.835 } 00:13:24.835 ]' 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.835 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.401 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:25.401 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:25.968 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.968 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:25.968 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.968 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:25.968 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.968 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.968 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:25.968 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.331 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.331 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.331 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:26.331 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.331 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.589 00:13:26.589 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.589 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.589 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.847 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.847 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.847 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:26.847 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.847 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.847 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.847 { 00:13:26.847 "cntlid": 103, 00:13:26.847 "qid": 0, 00:13:26.847 "state": "enabled", 00:13:26.847 "thread": "nvmf_tgt_poll_group_000", 00:13:26.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:26.848 "listen_address": { 00:13:26.848 "trtype": "TCP", 00:13:26.848 "adrfam": "IPv4", 00:13:26.848 "traddr": "10.0.0.3", 00:13:26.848 "trsvcid": "4420" 00:13:26.848 }, 00:13:26.848 "peer_address": { 00:13:26.848 "trtype": "TCP", 00:13:26.848 "adrfam": "IPv4", 00:13:26.848 "traddr": "10.0.0.1", 00:13:26.848 "trsvcid": "40142" 00:13:26.848 }, 00:13:26.848 "auth": { 00:13:26.848 "state": "completed", 00:13:26.848 "digest": "sha512", 00:13:26.848 "dhgroup": "null" 00:13:26.848 } 00:13:26.848 } 00:13:26.848 ]' 00:13:26.848 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.848 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.848 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.107 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:27.107 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.107 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.107 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.107 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.365 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:27.365 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:27.932 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.192 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.192 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.192 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.192 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.192 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.760 00:13:28.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.018 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.018 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.018 
10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.018 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.018 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.018 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.018 { 00:13:29.018 "cntlid": 105, 00:13:29.018 "qid": 0, 00:13:29.018 "state": "enabled", 00:13:29.018 "thread": "nvmf_tgt_poll_group_000", 00:13:29.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:29.018 "listen_address": { 00:13:29.018 "trtype": "TCP", 00:13:29.018 "adrfam": "IPv4", 00:13:29.018 "traddr": "10.0.0.3", 00:13:29.018 "trsvcid": "4420" 00:13:29.018 }, 00:13:29.018 "peer_address": { 00:13:29.018 "trtype": "TCP", 00:13:29.018 "adrfam": "IPv4", 00:13:29.018 "traddr": "10.0.0.1", 00:13:29.018 "trsvcid": "59218" 00:13:29.018 }, 00:13:29.018 "auth": { 00:13:29.018 "state": "completed", 00:13:29.018 "digest": "sha512", 00:13:29.018 "dhgroup": "ffdhe2048" 00:13:29.018 } 00:13:29.018 } 00:13:29.018 ]' 00:13:29.018 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.018 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.019 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.019 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:29.019 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.019 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.019 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.019 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.276 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:29.277 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:30.211 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.211 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:30.211 10:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.211 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.211 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.211 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.211 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:30.211 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.470 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.471 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.471 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.783 00:13:30.783 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.783 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.783 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.042 { 00:13:31.042 "cntlid": 107, 00:13:31.042 "qid": 0, 00:13:31.042 "state": "enabled", 00:13:31.042 "thread": "nvmf_tgt_poll_group_000", 00:13:31.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:31.042 "listen_address": { 00:13:31.042 "trtype": "TCP", 00:13:31.042 "adrfam": "IPv4", 00:13:31.042 "traddr": "10.0.0.3", 00:13:31.042 "trsvcid": "4420" 00:13:31.042 }, 00:13:31.042 "peer_address": { 00:13:31.042 "trtype": "TCP", 00:13:31.042 "adrfam": "IPv4", 00:13:31.042 "traddr": "10.0.0.1", 00:13:31.042 "trsvcid": "59240" 00:13:31.042 }, 00:13:31.042 "auth": { 00:13:31.042 "state": "completed", 00:13:31.042 "digest": "sha512", 00:13:31.042 "dhgroup": "ffdhe2048" 00:13:31.042 } 00:13:31.042 } 00:13:31.042 ]' 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.042 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.302 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.302 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.302 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.302 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.302 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.561 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:31.561 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:32.497 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.497 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:32.497 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.497 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.497 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.497 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.497 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:32.497 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.755 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.014 00:13:33.014 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.014 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.014 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:33.272 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.272 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.272 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.272 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.272 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.272 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.272 { 00:13:33.272 "cntlid": 109, 00:13:33.272 "qid": 0, 00:13:33.272 "state": "enabled", 00:13:33.272 "thread": "nvmf_tgt_poll_group_000", 00:13:33.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:33.272 "listen_address": { 00:13:33.272 "trtype": "TCP", 00:13:33.272 "adrfam": "IPv4", 00:13:33.272 "traddr": "10.0.0.3", 00:13:33.272 "trsvcid": "4420" 00:13:33.272 }, 00:13:33.272 "peer_address": { 00:13:33.272 "trtype": "TCP", 00:13:33.272 "adrfam": "IPv4", 00:13:33.272 "traddr": "10.0.0.1", 00:13:33.272 "trsvcid": "59266" 00:13:33.272 }, 00:13:33.272 "auth": { 00:13:33.272 "state": "completed", 00:13:33.273 "digest": "sha512", 00:13:33.273 "dhgroup": "ffdhe2048" 00:13:33.273 } 00:13:33.273 } 00:13:33.273 ]' 00:13:33.273 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.273 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.273 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.531 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.531 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.531 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.531 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.531 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.789 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:33.789 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:34.724 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.724 10:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:34.724 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.724 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.724 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.724 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.724 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:34.724 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:34.983 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:34.983 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.984 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.306 00:13:35.306 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.306 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.306 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.564 { 00:13:35.564 "cntlid": 111, 00:13:35.564 "qid": 0, 00:13:35.564 "state": "enabled", 00:13:35.564 "thread": "nvmf_tgt_poll_group_000", 00:13:35.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:35.564 "listen_address": { 00:13:35.564 "trtype": "TCP", 00:13:35.564 "adrfam": "IPv4", 00:13:35.564 "traddr": "10.0.0.3", 00:13:35.564 "trsvcid": "4420" 00:13:35.564 }, 00:13:35.564 "peer_address": { 00:13:35.564 "trtype": "TCP", 00:13:35.564 "adrfam": "IPv4", 00:13:35.564 "traddr": "10.0.0.1", 00:13:35.564 "trsvcid": "59282" 00:13:35.564 }, 00:13:35.564 "auth": { 00:13:35.564 "state": "completed", 00:13:35.564 "digest": "sha512", 00:13:35.564 "dhgroup": "ffdhe2048" 00:13:35.564 } 00:13:35.564 } 00:13:35.564 ]' 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.564 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.131 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:36.131 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:36.699 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.958 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.217 00:13:37.217 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.217 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.217 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.476 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.477 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.477 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.477 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.477 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.477 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.477 { 00:13:37.477 "cntlid": 113, 00:13:37.477 "qid": 0, 00:13:37.477 "state": "enabled", 00:13:37.477 "thread": "nvmf_tgt_poll_group_000", 00:13:37.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:37.477 "listen_address": { 00:13:37.477 "trtype": "TCP", 00:13:37.477 "adrfam": "IPv4", 00:13:37.477 "traddr": "10.0.0.3", 00:13:37.477 "trsvcid": "4420" 00:13:37.477 }, 00:13:37.477 "peer_address": { 00:13:37.477 "trtype": "TCP", 00:13:37.477 "adrfam": "IPv4", 00:13:37.477 "traddr": "10.0.0.1", 00:13:37.477 "trsvcid": "59314" 00:13:37.477 }, 00:13:37.477 "auth": { 00:13:37.477 "state": "completed", 00:13:37.477 "digest": "sha512", 00:13:37.477 "dhgroup": "ffdhe3072" 00:13:37.477 } 00:13:37.477 } 00:13:37.477 ]' 00:13:37.477 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.736 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.736 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.736 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.736 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.736 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.736 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.736 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.994 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:37.994 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:38.931 
10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.931 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:38.931 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.931 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.931 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.931 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.931 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:38.931 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.188 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.189 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.189 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.758 00:13:39.758 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.758 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.758 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.016 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.017 { 00:13:40.017 "cntlid": 115, 00:13:40.017 "qid": 0, 00:13:40.017 "state": "enabled", 00:13:40.017 "thread": "nvmf_tgt_poll_group_000", 00:13:40.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:40.017 "listen_address": { 00:13:40.017 "trtype": "TCP", 00:13:40.017 "adrfam": "IPv4", 00:13:40.017 "traddr": "10.0.0.3", 00:13:40.017 "trsvcid": "4420" 00:13:40.017 }, 00:13:40.017 "peer_address": { 00:13:40.017 "trtype": "TCP", 00:13:40.017 "adrfam": "IPv4", 00:13:40.017 "traddr": "10.0.0.1", 00:13:40.017 "trsvcid": "56256" 00:13:40.017 }, 00:13:40.017 "auth": { 00:13:40.017 "state": "completed", 00:13:40.017 "digest": "sha512", 00:13:40.017 "dhgroup": "ffdhe3072" 00:13:40.017 } 00:13:40.017 } 00:13:40.017 ]' 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.017 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.276 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:40.276 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: 
--dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:41.212 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.212 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:41.212 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.212 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.212 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.212 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.212 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:41.212 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:41.470 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:41.470 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.470 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.470 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.471 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.730 00:13:41.730 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.730 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.730 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.989 { 00:13:41.989 "cntlid": 117, 00:13:41.989 "qid": 0, 00:13:41.989 "state": "enabled", 00:13:41.989 "thread": "nvmf_tgt_poll_group_000", 00:13:41.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:41.989 "listen_address": { 00:13:41.989 "trtype": "TCP", 00:13:41.989 "adrfam": "IPv4", 00:13:41.989 "traddr": "10.0.0.3", 00:13:41.989 "trsvcid": "4420" 00:13:41.989 }, 00:13:41.989 "peer_address": { 00:13:41.989 "trtype": "TCP", 00:13:41.989 "adrfam": "IPv4", 00:13:41.989 "traddr": "10.0.0.1", 00:13:41.989 "trsvcid": "56296" 00:13:41.989 }, 00:13:41.989 "auth": { 00:13:41.989 "state": "completed", 00:13:41.989 "digest": "sha512", 00:13:41.989 "dhgroup": "ffdhe3072" 00:13:41.989 } 00:13:41.989 } 00:13:41.989 ]' 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.989 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.248 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:42.248 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.248 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.248 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.248 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.507 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:42.507 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 
6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:43.074 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.074 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:43.074 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.074 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.074 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.074 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.074 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:43.074 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.642 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.901 00:13:43.901 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.901 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.901 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.161 { 00:13:44.161 "cntlid": 119, 00:13:44.161 "qid": 0, 00:13:44.161 "state": "enabled", 00:13:44.161 "thread": "nvmf_tgt_poll_group_000", 00:13:44.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:44.161 "listen_address": { 00:13:44.161 "trtype": "TCP", 00:13:44.161 "adrfam": "IPv4", 00:13:44.161 "traddr": "10.0.0.3", 00:13:44.161 "trsvcid": "4420" 00:13:44.161 }, 00:13:44.161 "peer_address": { 00:13:44.161 "trtype": "TCP", 00:13:44.161 "adrfam": "IPv4", 00:13:44.161 "traddr": "10.0.0.1", 00:13:44.161 "trsvcid": "56320" 00:13:44.161 }, 00:13:44.161 "auth": { 00:13:44.161 "state": "completed", 00:13:44.161 "digest": "sha512", 00:13:44.161 "dhgroup": "ffdhe3072" 00:13:44.161 } 00:13:44.161 } 00:13:44.161 ]' 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.161 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.161 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:44.161 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.420 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.420 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.420 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.680 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:44.680 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret 
DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:45.248 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.814 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.073 00:13:46.073 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.073 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.073 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.332 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.332 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.332 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.332 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.591 { 00:13:46.591 "cntlid": 121, 00:13:46.591 "qid": 0, 00:13:46.591 "state": "enabled", 00:13:46.591 "thread": "nvmf_tgt_poll_group_000", 00:13:46.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:46.591 "listen_address": { 00:13:46.591 "trtype": "TCP", 00:13:46.591 "adrfam": "IPv4", 00:13:46.591 "traddr": "10.0.0.3", 00:13:46.591 "trsvcid": "4420" 00:13:46.591 }, 00:13:46.591 "peer_address": { 00:13:46.591 "trtype": "TCP", 00:13:46.591 "adrfam": "IPv4", 00:13:46.591 "traddr": "10.0.0.1", 00:13:46.591 "trsvcid": "56346" 00:13:46.591 }, 00:13:46.591 "auth": { 00:13:46.591 "state": "completed", 00:13:46.591 "digest": "sha512", 00:13:46.591 "dhgroup": "ffdhe4096" 00:13:46.591 } 00:13:46.591 } 00:13:46.591 ]' 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.591 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.850 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:46.850 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:47.420 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.724 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:47.724 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.724 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.724 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.724 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.724 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.724 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.983 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.983 10:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.242 00:13:48.242 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.242 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.242 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.500 { 00:13:48.500 "cntlid": 123, 00:13:48.500 "qid": 0, 00:13:48.500 "state": "enabled", 00:13:48.500 "thread": "nvmf_tgt_poll_group_000", 00:13:48.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:48.500 "listen_address": { 00:13:48.500 "trtype": "TCP", 00:13:48.500 "adrfam": "IPv4", 00:13:48.500 "traddr": "10.0.0.3", 00:13:48.500 "trsvcid": "4420" 00:13:48.500 }, 00:13:48.500 "peer_address": { 00:13:48.500 "trtype": "TCP", 00:13:48.500 "adrfam": "IPv4", 00:13:48.500 "traddr": "10.0.0.1", 00:13:48.500 "trsvcid": "55676" 00:13:48.500 }, 00:13:48.500 "auth": { 00:13:48.500 "state": "completed", 00:13:48.500 "digest": "sha512", 00:13:48.500 "dhgroup": "ffdhe4096" 00:13:48.500 } 00:13:48.500 } 00:13:48.500 ]' 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:48.500 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.758 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:48.758 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.759 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.759 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.759 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.017 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret 
DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:49.017 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:49.587 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.587 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:49.587 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.587 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.587 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.587 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.587 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.587 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.849 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.415 00:13:50.415 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.415 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.415 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.673 { 00:13:50.673 "cntlid": 125, 00:13:50.673 "qid": 0, 00:13:50.673 "state": "enabled", 00:13:50.673 "thread": "nvmf_tgt_poll_group_000", 00:13:50.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:50.673 "listen_address": { 00:13:50.673 "trtype": "TCP", 00:13:50.673 "adrfam": "IPv4", 00:13:50.673 "traddr": "10.0.0.3", 00:13:50.673 "trsvcid": "4420" 00:13:50.673 }, 00:13:50.673 "peer_address": { 00:13:50.673 "trtype": "TCP", 00:13:50.673 "adrfam": "IPv4", 00:13:50.673 "traddr": "10.0.0.1", 00:13:50.673 "trsvcid": "55712" 00:13:50.673 }, 00:13:50.673 "auth": { 00:13:50.673 "state": "completed", 00:13:50.673 "digest": "sha512", 00:13:50.673 "dhgroup": "ffdhe4096" 00:13:50.673 } 00:13:50.673 } 00:13:50.673 ]' 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.673 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.240 10:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:51.240 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:51.822 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.822 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:51.822 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.822 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.822 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.822 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.822 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:51.822 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.082 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.649 00:13:52.649 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.649 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.649 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.907 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.907 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.907 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.907 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.907 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.907 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.907 { 00:13:52.907 "cntlid": 127, 00:13:52.907 "qid": 0, 00:13:52.907 "state": "enabled", 00:13:52.907 "thread": "nvmf_tgt_poll_group_000", 00:13:52.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:52.907 "listen_address": { 00:13:52.907 "trtype": "TCP", 00:13:52.907 "adrfam": "IPv4", 00:13:52.907 "traddr": "10.0.0.3", 00:13:52.907 "trsvcid": "4420" 00:13:52.907 }, 00:13:52.907 "peer_address": { 00:13:52.907 "trtype": "TCP", 00:13:52.907 "adrfam": "IPv4", 00:13:52.907 "traddr": "10.0.0.1", 00:13:52.907 "trsvcid": "55746" 00:13:52.907 }, 00:13:52.907 "auth": { 00:13:52.907 "state": "completed", 00:13:52.907 "digest": "sha512", 00:13:52.907 "dhgroup": "ffdhe4096" 00:13:52.907 } 00:13:52.907 } 00:13:52.907 ]' 00:13:52.908 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.908 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:52.908 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.908 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:52.908 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.908 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.908 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.908 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:13:53.166 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:53.166 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:13:54.099 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.099 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:54.099 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.099 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.099 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.099 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.100 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.100 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:54.100 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.358 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.617 00:13:54.617 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.617 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.617 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.875 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.875 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.875 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.875 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.876 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.876 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.876 { 00:13:54.876 "cntlid": 129, 00:13:54.876 "qid": 0, 00:13:54.876 "state": "enabled", 00:13:54.876 "thread": "nvmf_tgt_poll_group_000", 00:13:54.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:54.876 "listen_address": { 00:13:54.876 "trtype": "TCP", 00:13:54.876 "adrfam": "IPv4", 00:13:54.876 "traddr": "10.0.0.3", 00:13:54.876 "trsvcid": "4420" 00:13:54.876 }, 00:13:54.876 "peer_address": { 00:13:54.876 "trtype": "TCP", 00:13:54.876 "adrfam": "IPv4", 00:13:54.876 "traddr": "10.0.0.1", 00:13:54.876 "trsvcid": "55772" 00:13:54.876 }, 00:13:54.876 "auth": { 00:13:54.876 "state": "completed", 00:13:54.876 "digest": "sha512", 00:13:54.876 "dhgroup": "ffdhe6144" 00:13:54.876 } 00:13:54.876 } 00:13:54.876 ]' 00:13:54.876 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.134 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.134 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.134 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.134 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.134 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.134 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.134 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.469 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:55.469 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:13:56.065 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.065 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:56.065 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.065 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.323 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.323 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.323 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.323 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.581 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.149 00:13:57.149 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.149 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.149 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.407 { 00:13:57.407 "cntlid": 131, 00:13:57.407 "qid": 0, 00:13:57.407 "state": "enabled", 00:13:57.407 "thread": "nvmf_tgt_poll_group_000", 00:13:57.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:57.407 "listen_address": { 00:13:57.407 "trtype": "TCP", 00:13:57.407 "adrfam": "IPv4", 00:13:57.407 "traddr": "10.0.0.3", 00:13:57.407 "trsvcid": "4420" 00:13:57.407 }, 00:13:57.407 "peer_address": { 00:13:57.407 "trtype": "TCP", 00:13:57.407 "adrfam": "IPv4", 00:13:57.407 "traddr": "10.0.0.1", 00:13:57.407 "trsvcid": "55810" 00:13:57.407 }, 00:13:57.407 "auth": { 00:13:57.407 "state": "completed", 00:13:57.407 "digest": "sha512", 00:13:57.407 "dhgroup": "ffdhe6144" 00:13:57.407 } 00:13:57.407 } 00:13:57.407 ]' 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.407 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.665 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:57.665 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:13:58.597 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.597 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:13:58.597 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.597 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.597 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.597 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.597 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.597 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.855 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.114 00:13:59.114 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.114 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.114 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.683 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.683 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.683 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.683 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.683 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.683 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.683 { 00:13:59.683 "cntlid": 133, 00:13:59.683 "qid": 0, 00:13:59.683 "state": "enabled", 00:13:59.683 "thread": "nvmf_tgt_poll_group_000", 00:13:59.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:13:59.683 "listen_address": { 00:13:59.683 "trtype": "TCP", 00:13:59.683 "adrfam": "IPv4", 00:13:59.683 "traddr": "10.0.0.3", 00:13:59.683 "trsvcid": "4420" 00:13:59.683 }, 00:13:59.683 "peer_address": { 00:13:59.683 "trtype": "TCP", 00:13:59.683 "adrfam": "IPv4", 00:13:59.683 "traddr": "10.0.0.1", 00:13:59.683 "trsvcid": "43780" 00:13:59.683 }, 00:13:59.683 "auth": { 00:13:59.683 "state": "completed", 00:13:59.683 "digest": "sha512", 00:13:59.683 "dhgroup": "ffdhe6144" 00:13:59.683 } 00:13:59.683 } 00:13:59.683 ]' 00:13:59.683 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.683 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.684 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.684 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:59.684 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.684 10:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.684 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.684 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.943 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:13:59.943 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:14:00.881 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.881 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:00.881 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.881 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.881 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.881 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.881 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:00.881 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.140 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.707 00:14:01.707 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.707 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.707 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.707 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.707 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.707 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.707 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.966 { 00:14:01.966 "cntlid": 135, 00:14:01.966 "qid": 0, 00:14:01.966 "state": "enabled", 00:14:01.966 "thread": "nvmf_tgt_poll_group_000", 00:14:01.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:01.966 "listen_address": { 00:14:01.966 "trtype": "TCP", 00:14:01.966 "adrfam": "IPv4", 00:14:01.966 "traddr": "10.0.0.3", 00:14:01.966 "trsvcid": "4420" 00:14:01.966 }, 00:14:01.966 "peer_address": { 00:14:01.966 "trtype": "TCP", 00:14:01.966 "adrfam": "IPv4", 00:14:01.966 "traddr": "10.0.0.1", 00:14:01.966 "trsvcid": "43802" 00:14:01.966 }, 00:14:01.966 "auth": { 00:14:01.966 "state": "completed", 00:14:01.966 "digest": "sha512", 00:14:01.966 "dhgroup": "ffdhe6144" 00:14:01.966 } 00:14:01.966 } 00:14:01.966 ]' 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.966 
10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.966 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.225 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:14:02.225 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:14:02.795 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.795 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:02.795 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.795 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.123 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.123 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.123 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.123 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:03.123 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.382 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.382 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.382 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.382 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.382 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.949 00:14:03.949 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.949 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.950 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.209 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.209 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.209 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.209 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.209 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.209 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.209 { 00:14:04.209 "cntlid": 137, 00:14:04.209 "qid": 0, 00:14:04.209 "state": "enabled", 00:14:04.209 "thread": "nvmf_tgt_poll_group_000", 00:14:04.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:04.209 "listen_address": { 00:14:04.209 "trtype": "TCP", 00:14:04.209 "adrfam": "IPv4", 00:14:04.209 "traddr": "10.0.0.3", 00:14:04.209 "trsvcid": "4420" 00:14:04.209 }, 00:14:04.209 "peer_address": { 00:14:04.209 "trtype": "TCP", 00:14:04.209 "adrfam": "IPv4", 00:14:04.209 "traddr": "10.0.0.1", 00:14:04.209 "trsvcid": "43840" 00:14:04.209 }, 00:14:04.209 "auth": { 00:14:04.209 "state": "completed", 00:14:04.209 "digest": "sha512", 00:14:04.209 "dhgroup": "ffdhe8192" 00:14:04.209 } 00:14:04.209 } 00:14:04.209 ]' 00:14:04.209 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.209 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:04.209 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.209 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:04.209 10:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.209 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.209 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.209 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.777 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:14:04.777 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:14:05.345 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.345 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:05.345 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.345 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.345 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.345 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.345 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:05.345 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.604 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.171 00:14:06.171 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.171 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.171 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.430 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.430 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.430 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.430 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.430 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.430 { 00:14:06.430 "cntlid": 139, 00:14:06.430 "qid": 0, 00:14:06.430 "state": "enabled", 00:14:06.430 "thread": "nvmf_tgt_poll_group_000", 00:14:06.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:06.430 "listen_address": { 00:14:06.430 "trtype": "TCP", 00:14:06.430 "adrfam": "IPv4", 00:14:06.430 "traddr": "10.0.0.3", 00:14:06.430 "trsvcid": "4420" 00:14:06.430 }, 00:14:06.430 "peer_address": { 00:14:06.430 "trtype": "TCP", 00:14:06.430 "adrfam": "IPv4", 00:14:06.430 "traddr": "10.0.0.1", 00:14:06.430 "trsvcid": "43878" 00:14:06.430 }, 00:14:06.430 "auth": { 00:14:06.430 "state": "completed", 00:14:06.430 "digest": "sha512", 00:14:06.430 "dhgroup": "ffdhe8192" 00:14:06.430 } 00:14:06.430 } 00:14:06.430 ]' 00:14:06.430 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.689 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.689 10:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.689 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.689 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.689 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.689 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.689 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.948 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:14:06.948 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: --dhchap-ctrl-secret DHHC-1:02:NWE0OTVjMWJmODA3MTliNWRjMjk2YjYyZTg2MTIyZmI4YzBjYTVkYzk1NzQwMDIzDDJF0Q==: 00:14:07.515 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.774 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:07.774 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.774 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.774 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.774 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.774 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:07.774 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.033 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.601 00:14:08.601 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.601 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.601 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.862 { 00:14:08.862 "cntlid": 141, 00:14:08.862 "qid": 0, 00:14:08.862 "state": "enabled", 00:14:08.862 "thread": "nvmf_tgt_poll_group_000", 00:14:08.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:08.862 "listen_address": { 00:14:08.862 "trtype": "TCP", 00:14:08.862 "adrfam": "IPv4", 00:14:08.862 "traddr": "10.0.0.3", 00:14:08.862 "trsvcid": "4420" 00:14:08.862 }, 00:14:08.862 "peer_address": { 00:14:08.862 "trtype": "TCP", 00:14:08.862 "adrfam": "IPv4", 00:14:08.862 "traddr": "10.0.0.1", 00:14:08.862 "trsvcid": "54784" 00:14:08.862 }, 00:14:08.862 "auth": { 00:14:08.862 "state": "completed", 00:14:08.862 "digest": "sha512", 00:14:08.862 "dhgroup": "ffdhe8192" 00:14:08.862 } 00:14:08.862 } 00:14:08.862 ]' 00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
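The key2 pass traced above follows the same connect_authenticate round trip as the key0 and key1 passes: restrict the host-side bdev layer to one digest/dhgroup pair, register the host NQN on the subsystem with the key under test, attach the controller, then read back the negotiated auth parameters from the new qpair. Condensed into plain rpc.py calls (socket paths, addresses, key names and NQNs copied from this log; the test's rpc_cmd/hostrpc helpers wrap the same commands, and the three separate jq checks are folded into one call here), the pass is roughly:

  # target side: allow this host to authenticate with key2/ckey2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: pin the negotiation to sha512/ffdhe8192, then attach
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify what was negotiated on the resulting qpair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'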
00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.862 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.122 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:09.122 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.122 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.122 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.122 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.382 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:14:09.382 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:01:NjIyMmJhYTQ5NDBmOGViNmViNzIyMTc1YWE2MGM0N2XY0Ota: 00:14:09.951 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.951 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:09.951 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.951 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.951 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.951 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.951 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:09.951 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.211 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:10.211 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.211 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.211 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.786 00:14:10.786 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.786 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.786 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.046 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.046 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.046 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.046 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.046 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.046 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.046 { 00:14:11.046 "cntlid": 143, 00:14:11.046 "qid": 0, 00:14:11.046 "state": "enabled", 00:14:11.046 "thread": "nvmf_tgt_poll_group_000", 00:14:11.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:11.046 "listen_address": { 00:14:11.046 "trtype": "TCP", 00:14:11.046 "adrfam": "IPv4", 00:14:11.046 "traddr": "10.0.0.3", 00:14:11.046 "trsvcid": "4420" 00:14:11.046 }, 00:14:11.046 "peer_address": { 00:14:11.046 "trtype": "TCP", 00:14:11.046 "adrfam": "IPv4", 00:14:11.046 "traddr": "10.0.0.1", 00:14:11.046 "trsvcid": "54820" 00:14:11.046 }, 00:14:11.046 "auth": { 00:14:11.046 "state": "completed", 00:14:11.046 "digest": "sha512", 00:14:11.046 "dhgroup": "ffdhe8192" 00:14:11.046 } 00:14:11.046 } 00:14:11.046 ]' 00:14:11.046 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:11.305 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.305 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.305 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.305 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.305 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.305 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.305 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.565 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:14:11.565 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.501 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:12.761 10:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.761 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.330 00:14:13.330 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.330 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.330 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.589 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.589 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.589 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.589 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.589 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.589 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.589 { 00:14:13.589 "cntlid": 145, 00:14:13.589 "qid": 0, 00:14:13.589 "state": "enabled", 00:14:13.589 "thread": "nvmf_tgt_poll_group_000", 00:14:13.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:13.589 "listen_address": { 00:14:13.589 "trtype": "TCP", 00:14:13.589 "adrfam": "IPv4", 00:14:13.589 "traddr": "10.0.0.3", 
00:14:13.589 "trsvcid": "4420" 00:14:13.589 }, 00:14:13.589 "peer_address": { 00:14:13.589 "trtype": "TCP", 00:14:13.589 "adrfam": "IPv4", 00:14:13.589 "traddr": "10.0.0.1", 00:14:13.589 "trsvcid": "54848" 00:14:13.589 }, 00:14:13.589 "auth": { 00:14:13.589 "state": "completed", 00:14:13.589 "digest": "sha512", 00:14:13.589 "dhgroup": "ffdhe8192" 00:14:13.589 } 00:14:13.589 } 00:14:13.589 ]' 00:14:13.589 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.848 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.848 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.848 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:13.848 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.848 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.848 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.848 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.108 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:14:14.108 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:00:OTU3NDcyMDNlNGIyMmI1NDk0MGVhMzkzNTMzNzE1MzFlNDhkZmI0N2I5MjkxZWEx3KW0SQ==: --dhchap-ctrl-secret DHHC-1:03:NTk4ZGVmNDFkZTU2ZDMwMTVlOWJhYTAwZDA4NzNkYTk1M2MxZmYxYzk0MWZjNTY2NDZlMzY4ZDUzYmYwOTQ3NOt8iug=: 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.676 
10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:14.676 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.677 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:14.677 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:14.677 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:15.614 request: 00:14:15.614 { 00:14:15.614 "name": "nvme0", 00:14:15.614 "trtype": "tcp", 00:14:15.614 "traddr": "10.0.0.3", 00:14:15.614 "adrfam": "ipv4", 00:14:15.614 "trsvcid": "4420", 00:14:15.614 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:15.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:15.614 "prchk_reftag": false, 00:14:15.614 "prchk_guard": false, 00:14:15.614 "hdgst": false, 00:14:15.614 "ddgst": false, 00:14:15.614 "dhchap_key": "key2", 00:14:15.614 "allow_unrecognized_csi": false, 00:14:15.614 "method": "bdev_nvme_attach_controller", 00:14:15.614 "req_id": 1 00:14:15.614 } 00:14:15.614 Got JSON-RPC error response 00:14:15.614 response: 00:14:15.614 { 00:14:15.614 "code": -5, 00:14:15.614 "message": "Input/output error" 00:14:15.614 } 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
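The request/response pair above is the negative half of the check: the subsystem was re-added for this host with --dhchap-key key1 only, so an attach attempt that presents key2 must fail DH-HMAC-CHAP, and the attach RPC surfaces that as JSON-RPC error -5 ("Input/output error"). The test's NOT wrapper simply asserts the non-zero exit status; a minimal equivalent, reusing the attach command from this log, would be:

  # key2 is not configured for this host on the subsystem, so this must fail
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
      echo "attach with an unconfigured key unexpectedly succeeded" >&2
      exit 1
  fi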
00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.614 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:16.183 request: 00:14:16.183 { 00:14:16.183 "name": "nvme0", 00:14:16.183 "trtype": "tcp", 00:14:16.183 "traddr": "10.0.0.3", 00:14:16.183 "adrfam": "ipv4", 00:14:16.183 "trsvcid": "4420", 00:14:16.183 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:16.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:16.183 "prchk_reftag": false, 00:14:16.183 "prchk_guard": false, 00:14:16.183 "hdgst": false, 00:14:16.183 "ddgst": false, 00:14:16.183 "dhchap_key": "key1", 00:14:16.183 "dhchap_ctrlr_key": "ckey2", 00:14:16.183 "allow_unrecognized_csi": false, 00:14:16.183 "method": "bdev_nvme_attach_controller", 00:14:16.183 "req_id": 1 00:14:16.183 } 00:14:16.183 Got JSON-RPC error response 00:14:16.183 response: 00:14:16.183 { 00:14:16.183 "code": -5, 00:14:16.183 "message": "Input/output error" 00:14:16.183 } 00:14:16.183 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:16.183 10:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:16.183 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:16.183 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:16.183 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:16.183 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.183 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.184 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.754 request: 00:14:16.754 { 00:14:16.754 "name": "nvme0", 00:14:16.754 "trtype": "tcp", 00:14:16.754 "traddr": "10.0.0.3", 00:14:16.754 "adrfam": "ipv4", 00:14:16.754 "trsvcid": "4420", 00:14:16.754 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:14:16.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:16.754 "prchk_reftag": false, 00:14:16.754 "prchk_guard": false, 00:14:16.754 "hdgst": false, 00:14:16.754 "ddgst": false, 00:14:16.754 "dhchap_key": "key1", 00:14:16.754 "dhchap_ctrlr_key": "ckey1", 00:14:16.754 "allow_unrecognized_csi": false, 00:14:16.754 "method": "bdev_nvme_attach_controller", 00:14:16.754 "req_id": 1 00:14:16.754 } 00:14:16.754 Got JSON-RPC error response 00:14:16.754 response: 00:14:16.754 { 00:14:16.754 "code": -5, 00:14:16.754 "message": "Input/output error" 00:14:16.754 } 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67176 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67176 ']' 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67176 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67176 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.754 killing process with pid 67176 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67176' 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67176 00:14:16.754 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67176 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70325 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70325 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70325 ']' 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.014 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:17.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70325 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70325 ']' 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
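The replacement target (pid 70325) is launched inside the test's network namespace with --wait-for-rpc, which holds off framework initialization until an explicit start RPC, and with -L nvmf_auth to enable authentication debug logging. The startup command is the one traced above; a rough shell equivalent of starting it and waiting for its RPC socket (the test's waitforlisten helper polls the socket, approximated here with a single rpc_get_methods call) would be:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # block until /var/tmp/spdk.sock is up and answering RPCs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null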
00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.954 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.527 null0 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8bh 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.szg ]] 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.szg 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dYY 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.527 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.79M ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.79M 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:18.528 10:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.63k 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.JVe ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JVe 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MGW 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
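With the new target up, the DHHC-1 secrets written to /tmp earlier are registered as named keyring entries, and it is these names (key0 through key3 plus the ckey0 through ckey2 controller keys; key3 deliberately has no ckey counterpart) that the subsequent nvmf_subsystem_add_host and bdev_nvme_attach_controller calls reference. The loop traced above amounts to calls of the form (names and paths copied from this log):

  # one keyring entry per generated secret file
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.8bh
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.szg
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.MGW
  # key3 has no ckey, so its pass authenticates the host without a controller key
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a \
      --dhchap-key key3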
00:14:18.528 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:19.906 nvme0n1 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.906 { 00:14:19.906 "cntlid": 1, 00:14:19.906 "qid": 0, 00:14:19.906 "state": "enabled", 00:14:19.906 "thread": "nvmf_tgt_poll_group_000", 00:14:19.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:19.906 "listen_address": { 00:14:19.906 "trtype": "TCP", 00:14:19.906 "adrfam": "IPv4", 00:14:19.906 "traddr": "10.0.0.3", 00:14:19.906 "trsvcid": "4420" 00:14:19.906 }, 00:14:19.906 "peer_address": { 00:14:19.906 "trtype": "TCP", 00:14:19.906 "adrfam": "IPv4", 00:14:19.906 "traddr": "10.0.0.1", 00:14:19.906 "trsvcid": "39348" 00:14:19.906 }, 00:14:19.906 "auth": { 00:14:19.906 "state": "completed", 00:14:19.906 "digest": "sha512", 00:14:19.906 "dhgroup": "ffdhe8192" 00:14:19.906 } 00:14:19.906 } 00:14:19.906 ]' 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.906 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.165 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:20.165 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.165 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.165 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.165 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.424 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:14:20.424 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:14:20.992 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.992 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:20.992 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.992 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.992 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.992 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key3 00:14:20.992 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.992 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.251 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.251 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:21.251 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.510 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.770 request: 00:14:21.770 { 00:14:21.770 "name": "nvme0", 00:14:21.770 "trtype": "tcp", 00:14:21.770 "traddr": "10.0.0.3", 00:14:21.770 "adrfam": "ipv4", 00:14:21.770 "trsvcid": "4420", 00:14:21.770 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:21.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:21.770 "prchk_reftag": false, 00:14:21.770 "prchk_guard": false, 00:14:21.770 "hdgst": false, 00:14:21.770 "ddgst": false, 00:14:21.770 "dhchap_key": "key3", 00:14:21.770 "allow_unrecognized_csi": false, 00:14:21.770 "method": "bdev_nvme_attach_controller", 00:14:21.770 "req_id": 1 00:14:21.770 } 00:14:21.770 Got JSON-RPC error response 00:14:21.770 response: 00:14:21.770 { 00:14:21.770 "code": -5, 00:14:21.770 "message": "Input/output error" 00:14:21.770 } 00:14:21.770 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:21.770 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.770 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.770 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.770 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:21.770 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:21.770 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:21.770 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:22.028 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:22.028 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:22.028 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:22.028 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:22.029 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.029 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:22.029 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.029 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:22.029 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.029 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.287 request: 00:14:22.287 { 00:14:22.287 "name": "nvme0", 00:14:22.287 "trtype": "tcp", 00:14:22.287 "traddr": "10.0.0.3", 00:14:22.287 "adrfam": "ipv4", 00:14:22.287 "trsvcid": "4420", 00:14:22.287 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:22.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:22.287 "prchk_reftag": false, 00:14:22.287 "prchk_guard": false, 00:14:22.287 "hdgst": false, 00:14:22.287 "ddgst": false, 00:14:22.287 "dhchap_key": "key3", 00:14:22.287 "allow_unrecognized_csi": false, 00:14:22.288 "method": "bdev_nvme_attach_controller", 00:14:22.288 "req_id": 1 00:14:22.288 } 00:14:22.288 Got JSON-RPC error response 00:14:22.288 response: 00:14:22.288 { 00:14:22.288 "code": -5, 00:14:22.288 "message": "Input/output error" 00:14:22.288 } 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.288 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.547 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.548 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:23.116 request: 00:14:23.116 { 00:14:23.116 "name": "nvme0", 00:14:23.116 "trtype": "tcp", 00:14:23.116 "traddr": "10.0.0.3", 00:14:23.116 "adrfam": "ipv4", 00:14:23.116 "trsvcid": "4420", 00:14:23.116 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:23.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:23.116 "prchk_reftag": false, 00:14:23.116 "prchk_guard": false, 00:14:23.116 "hdgst": false, 00:14:23.116 "ddgst": false, 00:14:23.116 "dhchap_key": "key0", 00:14:23.116 "dhchap_ctrlr_key": "key1", 00:14:23.116 "allow_unrecognized_csi": false, 00:14:23.116 "method": "bdev_nvme_attach_controller", 00:14:23.116 "req_id": 1 00:14:23.116 } 00:14:23.116 Got JSON-RPC error response 00:14:23.116 response: 00:14:23.116 { 00:14:23.116 "code": -5, 00:14:23.116 "message": "Input/output error" 00:14:23.116 } 00:14:23.116 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:23.116 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.116 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.116 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:14:23.116 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:23.116 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:23.116 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:23.409 nvme0n1 00:14:23.409 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:23.409 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:23.409 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.668 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.668 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.668 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.926 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 00:14:23.926 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.926 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.926 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.926 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:23.926 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:23.926 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:24.862 nvme0n1 00:14:25.122 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:25.122 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.122 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:25.381 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.381 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:25.381 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.381 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.381 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.381 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:25.381 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:25.381 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.641 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.641 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:14:25.641 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid 6147973c-080a-4377-b1e7-85172bdc559a -l 0 --dhchap-secret DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: --dhchap-ctrl-secret DHHC-1:03:YjI2NTI4NDU3NDhjYTI3NmQzY2ZjMTc5MTY2YTY5Yjk4MDhkYTI4ZmFkMTZlODY3MTRlM2Y0Y2QyMDA1MDk1YQRcuxc=: 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.210 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:26.780 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:27.348 request: 00:14:27.348 { 00:14:27.348 "name": "nvme0", 00:14:27.348 "trtype": "tcp", 00:14:27.348 "traddr": "10.0.0.3", 00:14:27.348 "adrfam": "ipv4", 00:14:27.348 "trsvcid": "4420", 00:14:27.348 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:27.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a", 00:14:27.348 "prchk_reftag": false, 00:14:27.348 "prchk_guard": false, 00:14:27.348 "hdgst": false, 00:14:27.348 "ddgst": false, 00:14:27.348 "dhchap_key": "key1", 00:14:27.348 "allow_unrecognized_csi": false, 00:14:27.348 "method": "bdev_nvme_attach_controller", 00:14:27.348 "req_id": 1 00:14:27.348 } 00:14:27.349 Got JSON-RPC error response 00:14:27.349 response: 00:14:27.349 { 00:14:27.349 "code": -5, 00:14:27.349 "message": "Input/output error" 00:14:27.349 } 00:14:27.349 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:27.349 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.349 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.349 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.349 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:27.349 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:27.349 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:28.286 nvme0n1 00:14:28.286 
10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:28.286 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.286 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:28.544 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.544 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.544 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.112 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:29.112 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.112 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.112 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.113 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:29.113 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:29.113 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:29.372 nvme0n1 00:14:29.372 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:29.372 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:29.372 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.631 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.631 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.631 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.890 10:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: '' 2s 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:29.890 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:29.891 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:29.891 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: ]] 00:14:29.891 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzBjZjM4ZjliNDdhM2M1ODZkZGI5MTkyYTc3YmQ5ZDk5N/PI: 00:14:29.891 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:29.891 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:29.891 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: 2s 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:32.426 10:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:32.426 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: 00:14:32.427 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:32.427 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:32.427 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:32.427 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: ]] 00:14:32.427 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODI3OWRiYjk0OWU3ZjExZTVmNDAyOTI1NDNlNzhlODRiMDgxNjRhNTEyYzBjMWI4F4nkeQ==: 00:14:32.427 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:32.427 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:34.334 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:34.902 nvme0n1 00:14:34.902 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:34.902 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.902 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.902 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.902 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:34.902 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:35.854 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:36.422 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:36.422 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.422 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:36.682 10:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:36.682 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:37.251 request: 00:14:37.251 { 00:14:37.251 "name": "nvme0", 00:14:37.251 "dhchap_key": "key1", 00:14:37.251 "dhchap_ctrlr_key": "key3", 00:14:37.251 "method": "bdev_nvme_set_keys", 00:14:37.251 "req_id": 1 00:14:37.251 } 00:14:37.251 Got JSON-RPC error response 00:14:37.251 response: 00:14:37.251 { 00:14:37.251 "code": -13, 00:14:37.251 "message": "Permission denied" 00:14:37.251 } 00:14:37.251 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:37.251 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:37.251 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:37.251 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:37.251 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:37.251 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.251 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:37.510 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:37.510 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.887 10:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:38.887 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:39.823 nvme0n1 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:14:39.823 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:40.761 request: 00:14:40.761 { 00:14:40.761 "name": "nvme0", 00:14:40.761 "dhchap_key": "key2", 00:14:40.761 "dhchap_ctrlr_key": "key0", 00:14:40.761 "method": "bdev_nvme_set_keys", 00:14:40.761 "req_id": 1 00:14:40.761 } 00:14:40.761 Got JSON-RPC error response 00:14:40.761 response: 00:14:40.761 { 00:14:40.761 "code": -13, 00:14:40.761 "message": "Permission denied" 00:14:40.761 } 00:14:40.761 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:40.761 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:40.761 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:40.761 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:40.761 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:40.761 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.761 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:41.019 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:41.019 10:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:41.955 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:41.955 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:41.955 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67208 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67208 ']' 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67208 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.212 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67208 00:14:42.212 killing process with pid 67208 00:14:42.212 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:42.212 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:42.212 10:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67208' 00:14:42.212 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67208 00:14:42.212 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67208 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.778 rmmod nvme_tcp 00:14:42.778 rmmod nvme_fabrics 00:14:42.778 rmmod nvme_keyring 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70325 ']' 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70325 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70325 ']' 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70325 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70325 00:14:42.778 killing process with pid 70325 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70325' 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70325 00:14:42.778 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70325 00:14:43.037 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:43.037 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:43.037 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:43.037 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:43.037 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:14:43.037 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.038 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.297 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:43.297 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.297 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.297 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.297 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:43.297 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8bh /tmp/spdk.key-sha256.dYY /tmp/spdk.key-sha384.63k /tmp/spdk.key-sha512.MGW /tmp/spdk.key-sha512.szg /tmp/spdk.key-sha384.79M /tmp/spdk.key-sha256.JVe '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:43.297 00:14:43.297 real 3m19.248s 00:14:43.297 user 7m57.501s 00:14:43.297 sys 0m30.782s 00:14:43.297 ************************************ 00:14:43.297 END TEST nvmf_auth_target 00:14:43.297 ************************************ 00:14:43.297 10:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.297 10:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.297 ************************************ 00:14:43.297 START TEST nvmf_bdevio_no_huge 00:14:43.297 ************************************ 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:43.297 * Looking for test storage... 00:14:43.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:43.297 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.556 --rc genhtml_branch_coverage=1 00:14:43.556 --rc genhtml_function_coverage=1 00:14:43.556 --rc genhtml_legend=1 00:14:43.556 --rc geninfo_all_blocks=1 00:14:43.556 --rc geninfo_unexecuted_blocks=1 00:14:43.556 00:14:43.556 ' 00:14:43.556 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.556 --rc genhtml_branch_coverage=1 00:14:43.556 --rc genhtml_function_coverage=1 00:14:43.556 --rc genhtml_legend=1 00:14:43.556 --rc geninfo_all_blocks=1 00:14:43.556 --rc geninfo_unexecuted_blocks=1 00:14:43.556 00:14:43.556 ' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:43.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.557 --rc genhtml_branch_coverage=1 00:14:43.557 --rc genhtml_function_coverage=1 00:14:43.557 --rc genhtml_legend=1 00:14:43.557 --rc geninfo_all_blocks=1 00:14:43.557 --rc geninfo_unexecuted_blocks=1 00:14:43.557 00:14:43.557 ' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:43.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.557 --rc genhtml_branch_coverage=1 00:14:43.557 --rc genhtml_function_coverage=1 00:14:43.557 --rc genhtml_legend=1 00:14:43.557 --rc geninfo_all_blocks=1 00:14:43.557 --rc geninfo_unexecuted_blocks=1 00:14:43.557 00:14:43.557 ' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.557 
10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.557 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.557 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.558 
10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:43.558 Cannot find device "nvmf_init_br" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:43.558 Cannot find device "nvmf_init_br2" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:43.558 Cannot find device "nvmf_tgt_br" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.558 Cannot find device "nvmf_tgt_br2" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:43.558 Cannot find device "nvmf_init_br" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:43.558 Cannot find device "nvmf_init_br2" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:43.558 Cannot find device "nvmf_tgt_br" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:43.558 Cannot find device "nvmf_tgt_br2" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:43.558 Cannot find device "nvmf_br" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:43.558 Cannot find device "nvmf_init_if" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:43.558 Cannot find device "nvmf_init_if2" 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:43.558 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.558 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:43.558 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:43.817 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:43.817 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:43.817 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:43.817 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:43.817 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:43.817 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:43.817 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:43.818 10:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:43.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:43.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:14:43.818 00:14:43.818 --- 10.0.0.3 ping statistics --- 00:14:43.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.818 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:43.818 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:43.818 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:14:43.818 00:14:43.818 --- 10.0.0.4 ping statistics --- 00:14:43.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.818 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:43.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:43.818 00:14:43.818 --- 10.0.0.1 ping statistics --- 00:14:43.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.818 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:43.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:43.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:14:43.818 00:14:43.818 --- 10.0.0.2 ping statistics --- 00:14:43.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.818 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70986 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70986 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70986 ']' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.818 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:44.077 [2024-11-19 10:08:57.716250] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
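The bring-up traced above can be condensed into a standalone sketch of the test network (same namespace, device names, addresses, and firewall rules as in the trace; the initial "Cannot find device" pre-cleanup attempts and error handling are omitted, so treat this as an illustration rather than the exact nvmf_veth_init code):

# Sketch: rebuild the virtual topology the trace sets up. Two initiator veths
# stay in the default netns (10.0.0.1/2), two target veths move into
# nvmf_tgt_ns_spdk (10.0.0.3/4), and all bridge-side peers are enslaved to
# nvmf_br so initiator and target halves can reach each other.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Open TCP/4420 toward the initiator interfaces and allow bridge forwarding;
# the SPDK_NVMF comment tag is what the later iptables-save cleanup greps out.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
# The ping checks in the trace then confirm 10.0.0.3/4 are reachable from the
# default netns and 10.0.0.1/2 are reachable from inside nvmf_tgt_ns_spdk.
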
00:14:44.077 [2024-11-19 10:08:57.716366] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:44.077 [2024-11-19 10:08:57.889127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.336 [2024-11-19 10:08:57.968482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.336 [2024-11-19 10:08:57.968782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.336 [2024-11-19 10:08:57.968979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.336 [2024-11-19 10:08:57.969233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.336 [2024-11-19 10:08:57.969275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.336 [2024-11-19 10:08:57.970084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:44.336 [2024-11-19 10:08:57.970209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:44.336 [2024-11-19 10:08:57.970138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:44.336 [2024-11-19 10:08:57.970212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.336 [2024-11-19 10:08:57.976191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:45.273 [2024-11-19 10:08:58.841295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:45.273 Malloc0 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.273 10:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:45.273 [2024-11-19 10:08:58.889505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:45.273 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:45.274 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:45.274 { 00:14:45.274 "params": { 00:14:45.274 "name": "Nvme$subsystem", 00:14:45.274 "trtype": "$TEST_TRANSPORT", 00:14:45.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:45.274 "adrfam": "ipv4", 00:14:45.274 "trsvcid": "$NVMF_PORT", 00:14:45.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:45.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:45.274 "hdgst": ${hdgst:-false}, 00:14:45.274 "ddgst": ${ddgst:-false} 00:14:45.274 }, 00:14:45.274 "method": "bdev_nvme_attach_controller" 00:14:45.274 } 00:14:45.274 EOF 00:14:45.274 )") 00:14:45.274 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:45.274 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:14:45.274 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:45.274 10:08:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:45.274 "params": { 00:14:45.274 "name": "Nvme1", 00:14:45.274 "trtype": "tcp", 00:14:45.274 "traddr": "10.0.0.3", 00:14:45.274 "adrfam": "ipv4", 00:14:45.274 "trsvcid": "4420", 00:14:45.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.274 "hdgst": false, 00:14:45.274 "ddgst": false 00:14:45.274 }, 00:14:45.274 "method": "bdev_nvme_attach_controller" 00:14:45.274 }' 00:14:45.274 [2024-11-19 10:08:58.943429] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:14:45.274 [2024-11-19 10:08:58.943515] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71022 ] 00:14:45.274 [2024-11-19 10:08:59.095712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:45.532 [2024-11-19 10:08:59.175619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.532 [2024-11-19 10:08:59.175760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.532 [2024-11-19 10:08:59.175766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.532 [2024-11-19 10:08:59.189927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.532 I/O targets: 00:14:45.532 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:45.532 00:14:45.532 00:14:45.532 CUnit - A unit testing framework for C - Version 2.1-3 00:14:45.532 http://cunit.sourceforge.net/ 00:14:45.532 00:14:45.532 00:14:45.532 Suite: bdevio tests on: Nvme1n1 00:14:45.532 Test: blockdev write read block ...passed 00:14:45.532 Test: blockdev write zeroes read block ...passed 00:14:45.532 Test: blockdev write zeroes read no split ...passed 00:14:45.791 Test: blockdev write zeroes read split ...passed 00:14:45.791 Test: blockdev write zeroes read split partial ...passed 00:14:45.791 Test: blockdev reset ...[2024-11-19 10:08:59.439589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:45.791 [2024-11-19 10:08:59.439714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1883310 (9): Bad file descriptor 00:14:45.791 [2024-11-19 10:08:59.453489] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:45.791 passed 00:14:45.791 Test: blockdev write read 8 blocks ...passed 00:14:45.791 Test: blockdev write read size > 128k ...passed 00:14:45.791 Test: blockdev write read invalid size ...passed 00:14:45.791 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:45.791 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:45.791 Test: blockdev write read max offset ...passed 00:14:45.791 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:45.791 Test: blockdev writev readv 8 blocks ...passed 00:14:45.791 Test: blockdev writev readv 30 x 1block ...passed 00:14:45.791 Test: blockdev writev readv block ...passed 00:14:45.791 Test: blockdev writev readv size > 128k ...passed 00:14:45.791 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:45.791 Test: blockdev comparev and writev ...[2024-11-19 10:08:59.463534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.791 [2024-11-19 10:08:59.463591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:45.791 [2024-11-19 10:08:59.463611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.791 [2024-11-19 10:08:59.463622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:45.791 [2024-11-19 10:08:59.463995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.791 [2024-11-19 10:08:59.464023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:45.791 [2024-11-19 10:08:59.464042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.791 [2024-11-19 10:08:59.464052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:45.791 [2024-11-19 10:08:59.464424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.791 [2024-11-19 10:08:59.464457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:45.791 [2024-11-19 10:08:59.464546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.791 [2024-11-19 10:08:59.464558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:45.792 [2024-11-19 10:08:59.464987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.792 [2024-11-19 10:08:59.465019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:45.792 [2024-11-19 10:08:59.465037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.792 [2024-11-19 10:08:59.465047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:45.792 passed 00:14:45.792 Test: blockdev nvme passthru rw ...passed 00:14:45.792 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:08:59.466320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.792 [2024-11-19 10:08:59.466449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:45.792 [2024-11-19 10:08:59.466721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.792 [2024-11-19 10:08:59.466754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:45.792 [2024-11-19 10:08:59.466970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.792 [2024-11-19 10:08:59.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:45.792 [2024-11-19 10:08:59.467312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.792 [2024-11-19 10:08:59.467343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:45.792 passed 00:14:45.792 Test: blockdev nvme admin passthru ...passed 00:14:45.792 Test: blockdev copy ...passed 00:14:45.792 00:14:45.792 Run Summary: Type Total Ran Passed Failed Inactive 00:14:45.792 suites 1 1 n/a 0 0 00:14:45.792 tests 23 23 23 0 0 00:14:45.792 asserts 152 152 152 0 n/a 00:14:45.792 00:14:45.792 Elapsed time = 0.164 seconds 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.051 rmmod nvme_tcp 00:14:46.051 rmmod nvme_fabrics 00:14:46.051 rmmod nvme_keyring 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70986 ']' 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70986 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70986 ']' 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70986 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.051 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70986 00:14:46.311 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:46.311 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:46.311 killing process with pid 70986 00:14:46.311 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70986' 00:14:46.311 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70986 00:14:46.311 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70986 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:46.583 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:46.584 10:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:46.584 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:46.584 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:46.877 ************************************ 00:14:46.877 END TEST nvmf_bdevio_no_huge 00:14:46.877 ************************************ 00:14:46.877 00:14:46.877 real 0m3.558s 00:14:46.877 user 0m10.939s 00:14:46.877 sys 0m1.425s 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.877 ************************************ 00:14:46.877 START TEST nvmf_tls 00:14:46.877 ************************************ 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:46.877 * Looking for test storage... 
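Before the TLS suite repeats the same bring-up, the target-side provisioning that the bdevio run above exercised can be condensed into a short sketch (same NQN, bdev, listener, and flags as in the trace; it assumes the default /var/tmp/spdk.sock RPC socket of the nvmf_tgt started with --no-huge -s 1024, and is not the literal rpc_cmd wrapper used by the test):

# Sketch: the RPC calls traced above, issued through scripts/rpc.py.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport plus a 64 MiB, 512-byte-block malloc bdev to export.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
# Subsystem cnode1 carrying that namespace, listening on 10.0.0.3:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# bdevio then acts as the initiator, reading its config over a pipe:
#   bdevio --json /dev/fd/62 --no-huge -s 1024
# where the generated JSON carries one bdev_nvme_attach_controller call for
# traddr 10.0.0.3, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1 with
# hdgst/ddgst disabled, which is why the suite reports
# "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)".
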
00:14:46.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:46.877 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:47.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:47.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.138 --rc genhtml_branch_coverage=1 00:14:47.138 --rc genhtml_function_coverage=1 00:14:47.138 --rc genhtml_legend=1 00:14:47.138 --rc geninfo_all_blocks=1 00:14:47.138 --rc geninfo_unexecuted_blocks=1 00:14:47.138 00:14:47.138 ' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:47.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.138 --rc genhtml_branch_coverage=1 00:14:47.138 --rc genhtml_function_coverage=1 00:14:47.138 --rc genhtml_legend=1 00:14:47.138 --rc geninfo_all_blocks=1 00:14:47.138 --rc geninfo_unexecuted_blocks=1 00:14:47.138 00:14:47.138 ' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:47.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.138 --rc genhtml_branch_coverage=1 00:14:47.138 --rc genhtml_function_coverage=1 00:14:47.138 --rc genhtml_legend=1 00:14:47.138 --rc geninfo_all_blocks=1 00:14:47.138 --rc geninfo_unexecuted_blocks=1 00:14:47.138 00:14:47.138 ' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:47.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.138 --rc genhtml_branch_coverage=1 00:14:47.138 --rc genhtml_function_coverage=1 00:14:47.138 --rc genhtml_legend=1 00:14:47.138 --rc geninfo_all_blocks=1 00:14:47.138 --rc geninfo_unexecuted_blocks=1 00:14:47.138 00:14:47.138 ' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.138 10:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.138 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:47.138 
10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.138 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:47.139 Cannot find device "nvmf_init_br" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:47.139 Cannot find device "nvmf_init_br2" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:47.139 Cannot find device "nvmf_tgt_br" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.139 Cannot find device "nvmf_tgt_br2" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:47.139 Cannot find device "nvmf_init_br" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:47.139 Cannot find device "nvmf_init_br2" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:47.139 Cannot find device "nvmf_tgt_br" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:47.139 Cannot find device "nvmf_tgt_br2" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:47.139 Cannot find device "nvmf_br" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:47.139 Cannot find device "nvmf_init_if" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:47.139 Cannot find device "nvmf_init_if2" 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:47.139 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:47.398 10:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:47.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:47.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:47.398 00:14:47.398 --- 10.0.0.3 ping statistics --- 00:14:47.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.398 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:47.398 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:47.398 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:14:47.398 00:14:47.398 --- 10.0.0.4 ping statistics --- 00:14:47.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.398 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:47.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:47.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:47.398 00:14:47.398 --- 10.0.0.1 ping statistics --- 00:14:47.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.398 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:47.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:47.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:47.398 00:14:47.398 --- 10.0.0.2 ping statistics --- 00:14:47.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.398 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71254 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71254 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71254 ']' 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.398 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.399 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:47.399 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.399 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.399 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.657 [2024-11-19 10:09:01.325061] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
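For reference, the nvmf_veth_init sequence traced above builds the virtual topology the TLS test runs on: veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side peers, iptables ACCEPT rules for port 4420, and the four pings as a reachability check. A condensed sketch of that setup, reusing the interface and namespace names from the trace and omitting the second initiator/target pair; the authoritative version is nvmf_veth_init in test/nvmf/common.sh.

# Sketch only: condensed from the ip/iptables commands traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # host reaches the target-side address through the bridge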
00:14:47.657 [2024-11-19 10:09:01.325170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.657 [2024-11-19 10:09:01.476500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.657 [2024-11-19 10:09:01.543287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.657 [2024-11-19 10:09:01.543362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.657 [2024-11-19 10:09:01.543376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.657 [2024-11-19 10:09:01.543387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.657 [2024-11-19 10:09:01.543396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.657 [2024-11-19 10:09:01.543848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.916 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.917 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:47.917 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.917 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.917 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.917 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.917 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:47.917 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:48.177 true 00:14:48.177 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:48.177 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:48.436 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:48.436 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:48.436 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:48.694 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:48.694 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:48.954 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:48.954 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:48.954 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:49.213 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:49.213 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:49.472 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:49.472 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:49.473 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:49.473 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:50.040 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:50.040 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:50.040 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:50.040 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:50.040 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:50.608 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:50.608 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:50.608 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:50.868 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:50.868 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:51.128 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.MzioGRP3p9 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gcAwbKJjTu 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MzioGRP3p9 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gcAwbKJjTu 00:14:51.129 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:51.388 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:51.648 [2024-11-19 10:09:05.461604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.648 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.MzioGRP3p9 00:14:51.648 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MzioGRP3p9 00:14:51.648 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:52.217 [2024-11-19 10:09:05.816403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.217 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:52.476 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:52.735 [2024-11-19 10:09:06.376585] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:52.735 [2024-11-19 10:09:06.376994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:52.735 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:52.993 malloc0 00:14:52.994 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:53.253 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MzioGRP3p9 00:14:53.515 10:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:53.774 10:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MzioGRP3p9 00:15:03.754 Initializing NVMe Controllers 00:15:03.754 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.754 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:03.754 Initialization complete. Launching workers. 00:15:03.754 ======================================================== 00:15:03.754 Latency(us) 00:15:03.754 Device Information : IOPS MiB/s Average min max 00:15:03.754 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8515.96 33.27 7517.09 2305.83 13374.73 00:15:03.754 ======================================================== 00:15:03.754 Total : 8515.96 33.27 7517.09 2305.83 13374.73 00:15:03.754 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MzioGRP3p9 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MzioGRP3p9 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71485 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71485 /var/tmp/bdevperf.sock 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71485 ']' 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.754 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.013 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:04.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
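The NVMeTLSkey-1:01:... strings written to /tmp/tmp.MzioGRP3p9 and /tmp/tmp.gcAwbKJjTu earlier in the trace come from format_interchange_psk, which shells out to an inline python step. Judging by the output, it wraps the raw key bytes plus a little-endian CRC-32 in base64 behind an NVMeTLSkey-1:<hmac>: prefix, with hmac 1 rendered as the 01 field. A rough illustrative sketch of that transformation follows; the helper name format_psk_sketch is invented here, and the exact framing should be checked against format_interchange_psk in test/nvmf/common.sh.

# Sketch only: assumes payload = key bytes + CRC-32 (little endian), base64-encoded.
format_psk_sketch() {
    local hmac=$1 key=$2
    python3 -c '
import base64, struct, sys, zlib
key = sys.argv[2].encode()
payload = key + struct.pack("<I", zlib.crc32(key))
print(f"NVMeTLSkey-1:{int(sys.argv[1]):02}:{base64.b64encode(payload).decode()}:")
' "$hmac" "$key"
}

format_psk_sketch 1 00112233445566778899aabbccddeeff > /tmp/psk.example
chmod 0600 /tmp/psk.example   # the trace applies the same 0600 mode to the real key files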
00:15:04.013 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.013 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.013 [2024-11-19 10:09:17.692291] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:04.013 [2024-11-19 10:09:17.692406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71485 ] 00:15:04.013 [2024-11-19 10:09:17.837020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.013 [2024-11-19 10:09:17.893629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.272 [2024-11-19 10:09:17.948049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.272 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.272 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:04.272 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MzioGRP3p9 00:15:04.530 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:04.789 [2024-11-19 10:09:18.617195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:05.048 TLSTESTn1 00:15:05.048 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:05.048 Running I/O for 10 seconds... 
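Stripped of the xtrace prefixes, the happy-path TLS flow exercised above reduces to a short RPC sequence: configure the ssl socket implementation on the target, publish a TLS-enabled listener with a registered PSK, then register the same PSK on the bdevperf side and attach. A condensed recap using the same names and key file as the trace; it summarizes commands already shown and is not a substitute for target/tls.sh.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (nvmf_tgt was started with --wait-for-rpc inside the namespace)
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.MzioGRP3p9
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf was started with -z -r /var/tmp/bdevperf.sock)
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MzioGRP3p9
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0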
00:15:07.360 3883.00 IOPS, 15.17 MiB/s [2024-11-19T10:09:22.186Z] 3909.50 IOPS, 15.27 MiB/s [2024-11-19T10:09:23.121Z] 3929.33 IOPS, 15.35 MiB/s [2024-11-19T10:09:24.069Z] 3964.00 IOPS, 15.48 MiB/s [2024-11-19T10:09:25.008Z] 3973.00 IOPS, 15.52 MiB/s [2024-11-19T10:09:25.945Z] 3975.00 IOPS, 15.53 MiB/s [2024-11-19T10:09:26.879Z] 3975.00 IOPS, 15.53 MiB/s [2024-11-19T10:09:28.255Z] 3976.25 IOPS, 15.53 MiB/s [2024-11-19T10:09:29.192Z] 3984.78 IOPS, 15.57 MiB/s [2024-11-19T10:09:29.192Z] 3986.60 IOPS, 15.57 MiB/s 00:15:15.303 Latency(us) 00:15:15.303 [2024-11-19T10:09:29.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.303 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:15.303 Verification LBA range: start 0x0 length 0x2000 00:15:15.303 TLSTESTn1 : 10.02 3991.72 15.59 0.00 0.00 32004.25 7089.80 22878.02 00:15:15.303 [2024-11-19T10:09:29.192Z] =================================================================================================================== 00:15:15.303 [2024-11-19T10:09:29.192Z] Total : 3991.72 15.59 0.00 0.00 32004.25 7089.80 22878.02 00:15:15.303 { 00:15:15.303 "results": [ 00:15:15.303 { 00:15:15.303 "job": "TLSTESTn1", 00:15:15.303 "core_mask": "0x4", 00:15:15.303 "workload": "verify", 00:15:15.303 "status": "finished", 00:15:15.303 "verify_range": { 00:15:15.303 "start": 0, 00:15:15.303 "length": 8192 00:15:15.303 }, 00:15:15.303 "queue_depth": 128, 00:15:15.303 "io_size": 4096, 00:15:15.303 "runtime": 10.018729, 00:15:15.303 "iops": 3991.723900307115, 00:15:15.303 "mibps": 15.592671485574668, 00:15:15.303 "io_failed": 0, 00:15:15.303 "io_timeout": 0, 00:15:15.303 "avg_latency_us": 32004.251939842514, 00:15:15.303 "min_latency_us": 7089.8036363636365, 00:15:15.303 "max_latency_us": 22878.02181818182 00:15:15.303 } 00:15:15.303 ], 00:15:15.303 "core_count": 1 00:15:15.303 } 00:15:15.303 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.303 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71485 00:15:15.303 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71485 ']' 00:15:15.303 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71485 00:15:15.303 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:15.303 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.304 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71485 00:15:15.304 killing process with pid 71485 00:15:15.304 Received shutdown signal, test time was about 10.000000 seconds 00:15:15.304 00:15:15.304 Latency(us) 00:15:15.304 [2024-11-19T10:09:29.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.304 [2024-11-19T10:09:29.193Z] =================================================================================================================== 00:15:15.304 [2024-11-19T10:09:29.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:15.304 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:15.304 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:15.304 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71485' 00:15:15.304 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71485 00:15:15.304 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71485 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gcAwbKJjTu 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gcAwbKJjTu 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:15.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gcAwbKJjTu 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gcAwbKJjTu 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71618 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71618 /var/tmp/bdevperf.sock 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71618 ']' 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.304 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.568 [2024-11-19 10:09:29.202675] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:15:15.568 [2024-11-19 10:09:29.202833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71618 ] 00:15:15.568 [2024-11-19 10:09:29.363750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.568 [2024-11-19 10:09:29.421143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.830 [2024-11-19 10:09:29.478504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.398 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.398 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:16.398 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gcAwbKJjTu 00:15:16.658 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:16.917 [2024-11-19 10:09:30.698465] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:16.917 [2024-11-19 10:09:30.709030] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:16.917 [2024-11-19 10:09:30.709583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d64fb0 (107): Transport endpoint is not connected 00:15:16.917 [2024-11-19 10:09:30.710564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d64fb0 (9): Bad file descriptor 00:15:16.917 [2024-11-19 10:09:30.711561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:16.917 [2024-11-19 10:09:30.711589] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:16.917 [2024-11-19 10:09:30.711603] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:16.917 [2024-11-19 10:09:30.711621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:15:16.917 request: 00:15:16.917 { 00:15:16.917 "name": "TLSTEST", 00:15:16.917 "trtype": "tcp", 00:15:16.917 "traddr": "10.0.0.3", 00:15:16.917 "adrfam": "ipv4", 00:15:16.917 "trsvcid": "4420", 00:15:16.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.917 "prchk_reftag": false, 00:15:16.917 "prchk_guard": false, 00:15:16.917 "hdgst": false, 00:15:16.917 "ddgst": false, 00:15:16.917 "psk": "key0", 00:15:16.917 "allow_unrecognized_csi": false, 00:15:16.917 "method": "bdev_nvme_attach_controller", 00:15:16.917 "req_id": 1 00:15:16.917 } 00:15:16.917 Got JSON-RPC error response 00:15:16.917 response: 00:15:16.917 { 00:15:16.917 "code": -5, 00:15:16.917 "message": "Input/output error" 00:15:16.917 } 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71618 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71618 ']' 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71618 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71618 00:15:16.917 killing process with pid 71618 00:15:16.917 Received shutdown signal, test time was about 10.000000 seconds 00:15:16.917 00:15:16.917 Latency(us) 00:15:16.917 [2024-11-19T10:09:30.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.917 [2024-11-19T10:09:30.806Z] =================================================================================================================== 00:15:16.917 [2024-11-19T10:09:30.806Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71618' 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71618 00:15:16.917 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71618 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MzioGRP3p9 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MzioGRP3p9 
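The wrong-key case that just failed and the NQN-mismatch cases that follow all rely on the same negative-path pattern: run_bdevperf is wrapped in NOT, so the step passes only when bdev_nvme_attach_controller returns an error and the helper exits 1. A minimal sketch of that check for the wrong-key variant, written as a plain if so the intent is visible; NOT and run_bdevperf themselves are defined in autotest_common.sh and target/tls.sh.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Register the second key (one the target was never configured with) and
# require the attach to fail; succeeding here would mean the PSK is not enforced.
"$rpc_py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gcAwbKJjTu
if "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "controller attached with the wrong PSK" >&2
    exit 1
fi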
00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MzioGRP3p9 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MzioGRP3p9 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:17.176 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71646 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71646 /var/tmp/bdevperf.sock 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71646 ']' 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.177 10:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.177 [2024-11-19 10:09:31.028489] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:15:17.177 [2024-11-19 10:09:31.028604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ] 00:15:17.435 [2024-11-19 10:09:31.171996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.435 [2024-11-19 10:09:31.234408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.435 [2024-11-19 10:09:31.288922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.694 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.694 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:17.694 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MzioGRP3p9 00:15:17.953 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:15:18.212 [2024-11-19 10:09:31.853530] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.212 [2024-11-19 10:09:31.859086] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:18.212 [2024-11-19 10:09:31.859157] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:18.212 [2024-11-19 10:09:31.859255] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:18.212 [2024-11-19 10:09:31.859686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab3fb0 (107): Transport endpoint is not connected 00:15:18.212 [2024-11-19 10:09:31.860668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab3fb0 (9): Bad file descriptor 00:15:18.212 [2024-11-19 10:09:31.861663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:18.212 [2024-11-19 10:09:31.861707] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:18.212 [2024-11-19 10:09:31.861718] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:18.212 [2024-11-19 10:09:31.861736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:15:18.212 request: 00:15:18.212 { 00:15:18.212 "name": "TLSTEST", 00:15:18.212 "trtype": "tcp", 00:15:18.212 "traddr": "10.0.0.3", 00:15:18.212 "adrfam": "ipv4", 00:15:18.212 "trsvcid": "4420", 00:15:18.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.212 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:18.212 "prchk_reftag": false, 00:15:18.212 "prchk_guard": false, 00:15:18.212 "hdgst": false, 00:15:18.212 "ddgst": false, 00:15:18.212 "psk": "key0", 00:15:18.212 "allow_unrecognized_csi": false, 00:15:18.212 "method": "bdev_nvme_attach_controller", 00:15:18.212 "req_id": 1 00:15:18.212 } 00:15:18.212 Got JSON-RPC error response 00:15:18.212 response: 00:15:18.212 { 00:15:18.212 "code": -5, 00:15:18.212 "message": "Input/output error" 00:15:18.212 } 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71646 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71646 ']' 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71646 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71646 00:15:18.212 killing process with pid 71646 00:15:18.212 Received shutdown signal, test time was about 10.000000 seconds 00:15:18.212 00:15:18.212 Latency(us) 00:15:18.212 [2024-11-19T10:09:32.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.212 [2024-11-19T10:09:32.101Z] =================================================================================================================== 00:15:18.212 [2024-11-19T10:09:32.101Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71646' 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71646 00:15:18.212 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71646 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MzioGRP3p9 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MzioGRP3p9 
00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MzioGRP3p9 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MzioGRP3p9 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71667 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71667 /var/tmp/bdevperf.sock 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71667 ']' 00:15:18.471 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.472 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.472 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.472 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.472 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.472 [2024-11-19 10:09:32.170003] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:15:18.472 [2024-11-19 10:09:32.170131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71667 ] 00:15:18.472 [2024-11-19 10:09:32.312152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.730 [2024-11-19 10:09:32.373619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.730 [2024-11-19 10:09:32.427560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.730 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.730 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:18.730 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MzioGRP3p9 00:15:18.990 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:19.249 [2024-11-19 10:09:33.104262] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:19.249 [2024-11-19 10:09:33.109511] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:19.249 [2024-11-19 10:09:33.109559] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:19.249 [2024-11-19 10:09:33.109609] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:19.249 [2024-11-19 10:09:33.110232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2006fb0 (107): Transport endpoint is not connected 00:15:19.249 [2024-11-19 10:09:33.111216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2006fb0 (9): Bad file descriptor 00:15:19.249 [2024-11-19 10:09:33.112211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:15:19.249 [2024-11-19 10:09:33.112247] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:19.249 [2024-11-19 10:09:33.112259] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:15:19.249 [2024-11-19 10:09:33.112277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:15:19.249 request: 00:15:19.249 { 00:15:19.249 "name": "TLSTEST", 00:15:19.249 "trtype": "tcp", 00:15:19.249 "traddr": "10.0.0.3", 00:15:19.249 "adrfam": "ipv4", 00:15:19.249 "trsvcid": "4420", 00:15:19.249 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:19.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.250 "prchk_reftag": false, 00:15:19.250 "prchk_guard": false, 00:15:19.250 "hdgst": false, 00:15:19.250 "ddgst": false, 00:15:19.250 "psk": "key0", 00:15:19.250 "allow_unrecognized_csi": false, 00:15:19.250 "method": "bdev_nvme_attach_controller", 00:15:19.250 "req_id": 1 00:15:19.250 } 00:15:19.250 Got JSON-RPC error response 00:15:19.250 response: 00:15:19.250 { 00:15:19.250 "code": -5, 00:15:19.250 "message": "Input/output error" 00:15:19.250 } 00:15:19.250 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71667 00:15:19.250 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71667 ']' 00:15:19.250 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71667 00:15:19.250 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.509 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.509 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71667 00:15:19.509 killing process with pid 71667 00:15:19.509 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.509 00:15:19.509 Latency(us) 00:15:19.509 [2024-11-19T10:09:33.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.509 [2024-11-19T10:09:33.398Z] =================================================================================================================== 00:15:19.509 [2024-11-19T10:09:33.398Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71667' 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71667 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71667 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:19.510 10:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71694 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71694 /var/tmp/bdevperf.sock 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71694 ']' 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.510 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.769 [2024-11-19 10:09:33.446689] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:15:19.769 [2024-11-19 10:09:33.446815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71694 ] 00:15:19.769 [2024-11-19 10:09:33.595350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.028 [2024-11-19 10:09:33.666960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.028 [2024-11-19 10:09:33.724284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:20.028 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.028 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:20.028 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:15:20.287 [2024-11-19 10:09:34.050061] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:15:20.287 [2024-11-19 10:09:34.050139] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:20.287 request: 00:15:20.287 { 00:15:20.287 "name": "key0", 00:15:20.287 "path": "", 00:15:20.287 "method": "keyring_file_add_key", 00:15:20.287 "req_id": 1 00:15:20.287 } 00:15:20.287 Got JSON-RPC error response 00:15:20.287 response: 00:15:20.287 { 00:15:20.287 "code": -1, 00:15:20.287 "message": "Operation not permitted" 00:15:20.287 } 00:15:20.287 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:20.546 [2024-11-19 10:09:34.362269] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:20.546 [2024-11-19 10:09:34.362352] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:20.546 request: 00:15:20.546 { 00:15:20.546 "name": "TLSTEST", 00:15:20.546 "trtype": "tcp", 00:15:20.547 "traddr": "10.0.0.3", 00:15:20.547 "adrfam": "ipv4", 00:15:20.547 "trsvcid": "4420", 00:15:20.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.547 "prchk_reftag": false, 00:15:20.547 "prchk_guard": false, 00:15:20.547 "hdgst": false, 00:15:20.547 "ddgst": false, 00:15:20.547 "psk": "key0", 00:15:20.547 "allow_unrecognized_csi": false, 00:15:20.547 "method": "bdev_nvme_attach_controller", 00:15:20.547 "req_id": 1 00:15:20.547 } 00:15:20.547 Got JSON-RPC error response 00:15:20.547 response: 00:15:20.547 { 00:15:20.547 "code": -126, 00:15:20.547 "message": "Required key not available" 00:15:20.547 } 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71694 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71694 ']' 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71694 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.547 10:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71694 00:15:20.547 killing process with pid 71694 00:15:20.547 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.547 00:15:20.547 Latency(us) 00:15:20.547 [2024-11-19T10:09:34.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.547 [2024-11-19T10:09:34.436Z] =================================================================================================================== 00:15:20.547 [2024-11-19T10:09:34.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71694' 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71694 00:15:20.547 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71694 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71254 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71254 ']' 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71254 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71254 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:20.806 killing process with pid 71254 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71254' 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71254 00:15:20.806 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71254 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ZeUWax8M3X 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ZeUWax8M3X 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71729 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71729 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71729 ']' 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.066 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.325 [2024-11-19 10:09:34.974222] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:21.325 [2024-11-19 10:09:34.974337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.325 [2024-11-19 10:09:35.120612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.325 [2024-11-19 10:09:35.180707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.325 [2024-11-19 10:09:35.180773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
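The format_interchange_psk / format_key steps above build the TLS PSK in the NVMe interchange format before writing it to a 0600 temp file. A minimal sketch of that construction, assuming the usual interchange layout (base64 of the configured PSK with its CRC-32 appended, prefixed by NVMeTLSkey-1 and a two-digit hash identifier, where 02 is taken here to mean SHA-384); the helper below is illustrative, not the tls.sh implementation itself, and should reproduce the NVMeTLSkey-1:02:... string above only if those layout assumptions hold:

    key=00112233445566778899aabbccddeeff0011223344556677   # configured PSK from the log
    digest=2                                                # assumption: 02 selects the SHA-384 retained hash
    key_long=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()), end="")' "$key" "$digest")
    key_long_path=$(mktemp)
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"   # keyring_file later refuses anything more permissive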
00:15:21.325 [2024-11-19 10:09:35.180784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.325 [2024-11-19 10:09:35.180793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.325 [2024-11-19 10:09:35.180800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.325 [2024-11-19 10:09:35.181202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.599 [2024-11-19 10:09:35.236110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ZeUWax8M3X 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZeUWax8M3X 00:15:21.599 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:21.858 [2024-11-19 10:09:35.628685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.858 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:22.116 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:22.375 [2024-11-19 10:09:36.196795] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:22.375 [2024-11-19 10:09:36.197074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:22.375 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:22.633 malloc0 00:15:22.892 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:23.151 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:15:23.410 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZeUWax8M3X 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
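Immediately above, target/tls.sh@166 (setup_nvmf_tgt) configures the target for the successful TLS run that follows. The same sequence, condensed from the rpc.py calls in the log into a standalone sketch (addresses, NQNs and the key path are the ones used by this job):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k sets up the TLS-secured listener (note the "TLS support is considered experimental" notice above)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0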
00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZeUWax8M3X 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71784 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71784 /var/tmp/bdevperf.sock 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71784 ']' 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.668 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.668 [2024-11-19 10:09:37.433315] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
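The entries that follow are the initiator half of the same run: bdevperf (pid 71784) registers the PSK on its own RPC socket, attaches the TLS-secured controller, and then perform_tests drives the 10-second verify workload. Condensed from the commands in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X
    $rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests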
00:15:23.668 [2024-11-19 10:09:37.433419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71784 ] 00:15:23.927 [2024-11-19 10:09:37.583833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.927 [2024-11-19 10:09:37.657934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.927 [2024-11-19 10:09:37.717293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:24.864 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.864 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:24.864 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:15:24.864 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:25.123 [2024-11-19 10:09:38.938328] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.383 TLSTESTn1 00:15:25.383 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:25.383 Running I/O for 10 seconds... 00:15:27.698 3900.00 IOPS, 15.23 MiB/s [2024-11-19T10:09:42.156Z] 3969.00 IOPS, 15.50 MiB/s [2024-11-19T10:09:43.544Z] 3980.00 IOPS, 15.55 MiB/s [2024-11-19T10:09:44.482Z] 3998.25 IOPS, 15.62 MiB/s [2024-11-19T10:09:45.420Z] 3998.20 IOPS, 15.62 MiB/s [2024-11-19T10:09:46.357Z] 4012.67 IOPS, 15.67 MiB/s [2024-11-19T10:09:47.341Z] 4024.29 IOPS, 15.72 MiB/s [2024-11-19T10:09:48.278Z] 4034.38 IOPS, 15.76 MiB/s [2024-11-19T10:09:49.214Z] 4039.56 IOPS, 15.78 MiB/s [2024-11-19T10:09:49.214Z] 4041.50 IOPS, 15.79 MiB/s 00:15:35.325 Latency(us) 00:15:35.325 [2024-11-19T10:09:49.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.325 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:35.325 Verification LBA range: start 0x0 length 0x2000 00:15:35.325 TLSTESTn1 : 10.02 4046.37 15.81 0.00 0.00 31571.62 6613.18 36461.85 00:15:35.325 [2024-11-19T10:09:49.214Z] =================================================================================================================== 00:15:35.325 [2024-11-19T10:09:49.214Z] Total : 4046.37 15.81 0.00 0.00 31571.62 6613.18 36461.85 00:15:35.325 { 00:15:35.325 "results": [ 00:15:35.325 { 00:15:35.325 "job": "TLSTESTn1", 00:15:35.325 "core_mask": "0x4", 00:15:35.325 "workload": "verify", 00:15:35.325 "status": "finished", 00:15:35.325 "verify_range": { 00:15:35.325 "start": 0, 00:15:35.325 "length": 8192 00:15:35.325 }, 00:15:35.325 "queue_depth": 128, 00:15:35.325 "io_size": 4096, 00:15:35.325 "runtime": 10.019354, 00:15:35.325 "iops": 4046.368658099115, 00:15:35.325 "mibps": 15.806127570699667, 00:15:35.325 "io_failed": 0, 00:15:35.325 "io_timeout": 0, 00:15:35.325 "avg_latency_us": 31571.622510617497, 00:15:35.325 "min_latency_us": 6613.178181818182, 00:15:35.325 
"max_latency_us": 36461.847272727275 00:15:35.325 } 00:15:35.325 ], 00:15:35.325 "core_count": 1 00:15:35.325 } 00:15:35.325 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:35.325 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71784 00:15:35.325 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71784 ']' 00:15:35.325 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71784 00:15:35.325 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:35.325 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.325 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71784 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:35.584 killing process with pid 71784 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71784' 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71784 00:15:35.584 Received shutdown signal, test time was about 10.000000 seconds 00:15:35.584 00:15:35.584 Latency(us) 00:15:35.584 [2024-11-19T10:09:49.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.584 [2024-11-19T10:09:49.473Z] =================================================================================================================== 00:15:35.584 [2024-11-19T10:09:49.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71784 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ZeUWax8M3X 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZeUWax8M3X 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZeUWax8M3X 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZeUWax8M3X 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZeUWax8M3X 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71915 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71915 /var/tmp/bdevperf.sock 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71915 ']' 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.584 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.843 [2024-11-19 10:09:49.487024] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
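In the entries below, the keyring refuses the same key file solely because target/tls.sh@171 loosened its mode to 0666: keyring_file_add_key reports "Invalid permissions ... 0100666" and returns -1, and the attach then fails with -126 because key0 was never added. keyring_file appears to accept only owner-only key files; restoring that mode (as tls.sh@182 later does) lets the identical add succeed:

    chmod 0600 /tmp/tmp.ZeUWax8M3X
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X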
00:15:35.843 [2024-11-19 10:09:49.487137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71915 ] 00:15:35.843 [2024-11-19 10:09:49.635275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.843 [2024-11-19 10:09:49.696957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.101 [2024-11-19 10:09:49.751354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.102 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.102 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:36.102 10:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:15:36.361 [2024-11-19 10:09:50.100066] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZeUWax8M3X': 0100666 00:15:36.361 [2024-11-19 10:09:50.100150] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:36.361 request: 00:15:36.361 { 00:15:36.361 "name": "key0", 00:15:36.361 "path": "/tmp/tmp.ZeUWax8M3X", 00:15:36.361 "method": "keyring_file_add_key", 00:15:36.361 "req_id": 1 00:15:36.361 } 00:15:36.361 Got JSON-RPC error response 00:15:36.361 response: 00:15:36.361 { 00:15:36.361 "code": -1, 00:15:36.361 "message": "Operation not permitted" 00:15:36.361 } 00:15:36.361 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:36.633 [2024-11-19 10:09:50.400775] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:36.633 [2024-11-19 10:09:50.400864] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:36.633 request: 00:15:36.633 { 00:15:36.633 "name": "TLSTEST", 00:15:36.633 "trtype": "tcp", 00:15:36.633 "traddr": "10.0.0.3", 00:15:36.633 "adrfam": "ipv4", 00:15:36.633 "trsvcid": "4420", 00:15:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.633 "prchk_reftag": false, 00:15:36.633 "prchk_guard": false, 00:15:36.633 "hdgst": false, 00:15:36.633 "ddgst": false, 00:15:36.633 "psk": "key0", 00:15:36.633 "allow_unrecognized_csi": false, 00:15:36.633 "method": "bdev_nvme_attach_controller", 00:15:36.633 "req_id": 1 00:15:36.633 } 00:15:36.633 Got JSON-RPC error response 00:15:36.633 response: 00:15:36.633 { 00:15:36.633 "code": -126, 00:15:36.633 "message": "Required key not available" 00:15:36.633 } 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71915 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71915 ']' 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71915 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71915 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:36.633 killing process with pid 71915 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71915' 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71915 00:15:36.633 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71915 00:15:36.633 Received shutdown signal, test time was about 10.000000 seconds 00:15:36.633 00:15:36.633 Latency(us) 00:15:36.633 [2024-11-19T10:09:50.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.633 [2024-11-19T10:09:50.522Z] =================================================================================================================== 00:15:36.633 [2024-11-19T10:09:50.523Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71729 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71729 ']' 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71729 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71729 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:36.926 killing process with pid 71729 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71729' 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71729 00:15:36.926 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71729 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71951 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71951 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71951 ']' 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.185 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.185 [2024-11-19 10:09:50.936210] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:37.185 [2024-11-19 10:09:50.936314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.443 [2024-11-19 10:09:51.087148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.443 [2024-11-19 10:09:51.146633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.443 [2024-11-19 10:09:51.146695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.443 [2024-11-19 10:09:51.146707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.443 [2024-11-19 10:09:51.146715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.443 [2024-11-19 10:09:51.146722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:37.443 [2024-11-19 10:09:51.147116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.443 [2024-11-19 10:09:51.201362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ZeUWax8M3X 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZeUWax8M3X 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.ZeUWax8M3X 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZeUWax8M3X 00:15:38.379 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:38.379 [2024-11-19 10:09:52.237819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.379 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:38.637 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:38.894 [2024-11-19 10:09:52.753947] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:38.894 [2024-11-19 10:09:52.754196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:38.894 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:39.153 malloc0 00:15:39.153 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:39.719 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:15:39.719 
[2024-11-19 10:09:53.576920] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZeUWax8M3X': 0100666 00:15:39.719 [2024-11-19 10:09:53.576984] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:39.719 request: 00:15:39.719 { 00:15:39.719 "name": "key0", 00:15:39.719 "path": "/tmp/tmp.ZeUWax8M3X", 00:15:39.719 "method": "keyring_file_add_key", 00:15:39.719 "req_id": 1 00:15:39.719 } 00:15:39.719 Got JSON-RPC error response 00:15:39.719 response: 00:15:39.719 { 00:15:39.719 "code": -1, 00:15:39.719 "message": "Operation not permitted" 00:15:39.719 } 00:15:39.719 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:39.977 [2024-11-19 10:09:53.840999] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:39.977 [2024-11-19 10:09:53.841082] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:39.977 request: 00:15:39.977 { 00:15:39.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.977 "host": "nqn.2016-06.io.spdk:host1", 00:15:39.977 "psk": "key0", 00:15:39.977 "method": "nvmf_subsystem_add_host", 00:15:39.978 "req_id": 1 00:15:39.978 } 00:15:39.978 Got JSON-RPC error response 00:15:39.978 response: 00:15:39.978 { 00:15:39.978 "code": -32603, 00:15:39.978 "message": "Internal error" 00:15:39.978 } 00:15:39.978 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:39.978 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:39.978 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:39.978 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:39.978 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71951 00:15:39.978 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71951 ']' 00:15:39.978 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71951 00:15:39.978 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:40.236 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.236 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71951 00:15:40.236 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:40.236 killing process with pid 71951 00:15:40.237 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:40.237 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71951' 00:15:40.237 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71951 00:15:40.237 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71951 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ZeUWax8M3X 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72016 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72016 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72016 ']' 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.237 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.494 [2024-11-19 10:09:54.173279] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:40.494 [2024-11-19 10:09:54.173388] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.494 [2024-11-19 10:09:54.319296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.494 [2024-11-19 10:09:54.379021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.494 [2024-11-19 10:09:54.379076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.494 [2024-11-19 10:09:54.379088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.494 [2024-11-19 10:09:54.379097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.494 [2024-11-19 10:09:54.379104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:40.494 [2024-11-19 10:09:54.379496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.753 [2024-11-19 10:09:54.433584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ZeUWax8M3X 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZeUWax8M3X 00:15:40.753 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:41.011 [2024-11-19 10:09:54.783887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.011 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:41.270 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:41.529 [2024-11-19 10:09:55.344075] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:41.529 [2024-11-19 10:09:55.344337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:41.529 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:41.787 malloc0 00:15:41.787 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:42.045 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:15:42.304 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72064 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72064 /var/tmp/bdevperf.sock 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72064 ']' 
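Once the bdevperf initiator in the following entries has attached (TLSTESTn1 again), target/tls.sh@198 snapshots the running target configuration with save_config; the long JSON dump that follows (tgtconf) includes the keyring entry for key0 alongside the sock, bdev, and nvmf subsystems. Captured and replayed in isolation, that step looks roughly like the sketch below (the file name and the --json replay are illustrative assumptions, not taken from this log):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgtconf.json
    # A saved config can later be fed back to a fresh target at startup, e.g.:
    #   nvmf_tgt --json /tmp/tgtconf.json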
00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.562 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.820 [2024-11-19 10:09:56.452668] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:42.820 [2024-11-19 10:09:56.452768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72064 ] 00:15:42.820 [2024-11-19 10:09:56.599830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.820 [2024-11-19 10:09:56.684830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.078 [2024-11-19 10:09:56.739354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.078 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.078 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:43.078 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:15:43.336 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:43.640 [2024-11-19 10:09:57.324409] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:43.640 TLSTESTn1 00:15:43.640 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:44.208 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:44.208 "subsystems": [ 00:15:44.208 { 00:15:44.208 "subsystem": "keyring", 00:15:44.208 "config": [ 00:15:44.208 { 00:15:44.208 "method": "keyring_file_add_key", 00:15:44.208 "params": { 00:15:44.208 "name": "key0", 00:15:44.208 "path": "/tmp/tmp.ZeUWax8M3X" 00:15:44.208 } 00:15:44.208 } 00:15:44.208 ] 00:15:44.208 }, 00:15:44.208 { 00:15:44.208 "subsystem": "iobuf", 00:15:44.208 "config": [ 00:15:44.208 { 00:15:44.208 "method": "iobuf_set_options", 00:15:44.208 "params": { 00:15:44.208 "small_pool_count": 8192, 00:15:44.208 "large_pool_count": 1024, 00:15:44.208 "small_bufsize": 8192, 00:15:44.208 "large_bufsize": 135168, 00:15:44.208 "enable_numa": false 00:15:44.208 } 00:15:44.208 } 00:15:44.208 ] 00:15:44.208 }, 00:15:44.208 { 00:15:44.208 "subsystem": "sock", 00:15:44.208 "config": [ 00:15:44.208 { 00:15:44.208 "method": "sock_set_default_impl", 00:15:44.208 "params": { 
00:15:44.208 "impl_name": "uring" 00:15:44.208 } 00:15:44.208 }, 00:15:44.208 { 00:15:44.208 "method": "sock_impl_set_options", 00:15:44.208 "params": { 00:15:44.208 "impl_name": "ssl", 00:15:44.208 "recv_buf_size": 4096, 00:15:44.208 "send_buf_size": 4096, 00:15:44.208 "enable_recv_pipe": true, 00:15:44.208 "enable_quickack": false, 00:15:44.208 "enable_placement_id": 0, 00:15:44.208 "enable_zerocopy_send_server": true, 00:15:44.208 "enable_zerocopy_send_client": false, 00:15:44.208 "zerocopy_threshold": 0, 00:15:44.208 "tls_version": 0, 00:15:44.208 "enable_ktls": false 00:15:44.208 } 00:15:44.208 }, 00:15:44.208 { 00:15:44.208 "method": "sock_impl_set_options", 00:15:44.208 "params": { 00:15:44.208 "impl_name": "posix", 00:15:44.208 "recv_buf_size": 2097152, 00:15:44.208 "send_buf_size": 2097152, 00:15:44.208 "enable_recv_pipe": true, 00:15:44.208 "enable_quickack": false, 00:15:44.208 "enable_placement_id": 0, 00:15:44.208 "enable_zerocopy_send_server": true, 00:15:44.208 "enable_zerocopy_send_client": false, 00:15:44.208 "zerocopy_threshold": 0, 00:15:44.208 "tls_version": 0, 00:15:44.208 "enable_ktls": false 00:15:44.208 } 00:15:44.208 }, 00:15:44.208 { 00:15:44.208 "method": "sock_impl_set_options", 00:15:44.208 "params": { 00:15:44.208 "impl_name": "uring", 00:15:44.208 "recv_buf_size": 2097152, 00:15:44.208 "send_buf_size": 2097152, 00:15:44.208 "enable_recv_pipe": true, 00:15:44.208 "enable_quickack": false, 00:15:44.208 "enable_placement_id": 0, 00:15:44.208 "enable_zerocopy_send_server": false, 00:15:44.208 "enable_zerocopy_send_client": false, 00:15:44.208 "zerocopy_threshold": 0, 00:15:44.208 "tls_version": 0, 00:15:44.208 "enable_ktls": false 00:15:44.208 } 00:15:44.208 } 00:15:44.208 ] 00:15:44.208 }, 00:15:44.208 { 00:15:44.208 "subsystem": "vmd", 00:15:44.208 "config": [] 00:15:44.208 }, 00:15:44.208 { 00:15:44.208 "subsystem": "accel", 00:15:44.208 "config": [ 00:15:44.208 { 00:15:44.208 "method": "accel_set_options", 00:15:44.208 "params": { 00:15:44.208 "small_cache_size": 128, 00:15:44.208 "large_cache_size": 16, 00:15:44.208 "task_count": 2048, 00:15:44.208 "sequence_count": 2048, 00:15:44.209 "buf_count": 2048 00:15:44.209 } 00:15:44.209 } 00:15:44.209 ] 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "subsystem": "bdev", 00:15:44.209 "config": [ 00:15:44.209 { 00:15:44.209 "method": "bdev_set_options", 00:15:44.209 "params": { 00:15:44.209 "bdev_io_pool_size": 65535, 00:15:44.209 "bdev_io_cache_size": 256, 00:15:44.209 "bdev_auto_examine": true, 00:15:44.209 "iobuf_small_cache_size": 128, 00:15:44.209 "iobuf_large_cache_size": 16 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "bdev_raid_set_options", 00:15:44.209 "params": { 00:15:44.209 "process_window_size_kb": 1024, 00:15:44.209 "process_max_bandwidth_mb_sec": 0 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "bdev_iscsi_set_options", 00:15:44.209 "params": { 00:15:44.209 "timeout_sec": 30 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "bdev_nvme_set_options", 00:15:44.209 "params": { 00:15:44.209 "action_on_timeout": "none", 00:15:44.209 "timeout_us": 0, 00:15:44.209 "timeout_admin_us": 0, 00:15:44.209 "keep_alive_timeout_ms": 10000, 00:15:44.209 "arbitration_burst": 0, 00:15:44.209 "low_priority_weight": 0, 00:15:44.209 "medium_priority_weight": 0, 00:15:44.209 "high_priority_weight": 0, 00:15:44.209 "nvme_adminq_poll_period_us": 10000, 00:15:44.209 "nvme_ioq_poll_period_us": 0, 00:15:44.209 "io_queue_requests": 0, 00:15:44.209 "delay_cmd_submit": 
true, 00:15:44.209 "transport_retry_count": 4, 00:15:44.209 "bdev_retry_count": 3, 00:15:44.209 "transport_ack_timeout": 0, 00:15:44.209 "ctrlr_loss_timeout_sec": 0, 00:15:44.209 "reconnect_delay_sec": 0, 00:15:44.209 "fast_io_fail_timeout_sec": 0, 00:15:44.209 "disable_auto_failback": false, 00:15:44.209 "generate_uuids": false, 00:15:44.209 "transport_tos": 0, 00:15:44.209 "nvme_error_stat": false, 00:15:44.209 "rdma_srq_size": 0, 00:15:44.209 "io_path_stat": false, 00:15:44.209 "allow_accel_sequence": false, 00:15:44.209 "rdma_max_cq_size": 0, 00:15:44.209 "rdma_cm_event_timeout_ms": 0, 00:15:44.209 "dhchap_digests": [ 00:15:44.209 "sha256", 00:15:44.209 "sha384", 00:15:44.209 "sha512" 00:15:44.209 ], 00:15:44.209 "dhchap_dhgroups": [ 00:15:44.209 "null", 00:15:44.209 "ffdhe2048", 00:15:44.209 "ffdhe3072", 00:15:44.209 "ffdhe4096", 00:15:44.209 "ffdhe6144", 00:15:44.209 "ffdhe8192" 00:15:44.209 ] 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "bdev_nvme_set_hotplug", 00:15:44.209 "params": { 00:15:44.209 "period_us": 100000, 00:15:44.209 "enable": false 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "bdev_malloc_create", 00:15:44.209 "params": { 00:15:44.209 "name": "malloc0", 00:15:44.209 "num_blocks": 8192, 00:15:44.209 "block_size": 4096, 00:15:44.209 "physical_block_size": 4096, 00:15:44.209 "uuid": "4fd95d33-a0e7-43a8-a436-a432351085b6", 00:15:44.209 "optimal_io_boundary": 0, 00:15:44.209 "md_size": 0, 00:15:44.209 "dif_type": 0, 00:15:44.209 "dif_is_head_of_md": false, 00:15:44.209 "dif_pi_format": 0 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "bdev_wait_for_examine" 00:15:44.209 } 00:15:44.209 ] 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "subsystem": "nbd", 00:15:44.209 "config": [] 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "subsystem": "scheduler", 00:15:44.209 "config": [ 00:15:44.209 { 00:15:44.209 "method": "framework_set_scheduler", 00:15:44.209 "params": { 00:15:44.209 "name": "static" 00:15:44.209 } 00:15:44.209 } 00:15:44.209 ] 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "subsystem": "nvmf", 00:15:44.209 "config": [ 00:15:44.209 { 00:15:44.209 "method": "nvmf_set_config", 00:15:44.209 "params": { 00:15:44.209 "discovery_filter": "match_any", 00:15:44.209 "admin_cmd_passthru": { 00:15:44.209 "identify_ctrlr": false 00:15:44.209 }, 00:15:44.209 "dhchap_digests": [ 00:15:44.209 "sha256", 00:15:44.209 "sha384", 00:15:44.209 "sha512" 00:15:44.209 ], 00:15:44.209 "dhchap_dhgroups": [ 00:15:44.209 "null", 00:15:44.209 "ffdhe2048", 00:15:44.209 "ffdhe3072", 00:15:44.209 "ffdhe4096", 00:15:44.209 "ffdhe6144", 00:15:44.209 "ffdhe8192" 00:15:44.209 ] 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "nvmf_set_max_subsystems", 00:15:44.209 "params": { 00:15:44.209 "max_subsystems": 1024 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "nvmf_set_crdt", 00:15:44.209 "params": { 00:15:44.209 "crdt1": 0, 00:15:44.209 "crdt2": 0, 00:15:44.209 "crdt3": 0 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "nvmf_create_transport", 00:15:44.209 "params": { 00:15:44.209 "trtype": "TCP", 00:15:44.209 "max_queue_depth": 128, 00:15:44.209 "max_io_qpairs_per_ctrlr": 127, 00:15:44.209 "in_capsule_data_size": 4096, 00:15:44.209 "max_io_size": 131072, 00:15:44.209 "io_unit_size": 131072, 00:15:44.209 "max_aq_depth": 128, 00:15:44.209 "num_shared_buffers": 511, 00:15:44.209 "buf_cache_size": 4294967295, 00:15:44.209 "dif_insert_or_strip": false, 00:15:44.209 "zcopy": false, 
00:15:44.209 "c2h_success": false, 00:15:44.209 "sock_priority": 0, 00:15:44.209 "abort_timeout_sec": 1, 00:15:44.209 "ack_timeout": 0, 00:15:44.209 "data_wr_pool_size": 0 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "nvmf_create_subsystem", 00:15:44.209 "params": { 00:15:44.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.209 "allow_any_host": false, 00:15:44.209 "serial_number": "SPDK00000000000001", 00:15:44.209 "model_number": "SPDK bdev Controller", 00:15:44.209 "max_namespaces": 10, 00:15:44.209 "min_cntlid": 1, 00:15:44.209 "max_cntlid": 65519, 00:15:44.209 "ana_reporting": false 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "nvmf_subsystem_add_host", 00:15:44.209 "params": { 00:15:44.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.209 "host": "nqn.2016-06.io.spdk:host1", 00:15:44.209 "psk": "key0" 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "nvmf_subsystem_add_ns", 00:15:44.209 "params": { 00:15:44.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.209 "namespace": { 00:15:44.209 "nsid": 1, 00:15:44.209 "bdev_name": "malloc0", 00:15:44.209 "nguid": "4FD95D33A0E743A8A436A432351085B6", 00:15:44.209 "uuid": "4fd95d33-a0e7-43a8-a436-a432351085b6", 00:15:44.209 "no_auto_visible": false 00:15:44.209 } 00:15:44.209 } 00:15:44.209 }, 00:15:44.209 { 00:15:44.209 "method": "nvmf_subsystem_add_listener", 00:15:44.209 "params": { 00:15:44.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.209 "listen_address": { 00:15:44.209 "trtype": "TCP", 00:15:44.209 "adrfam": "IPv4", 00:15:44.209 "traddr": "10.0.0.3", 00:15:44.209 "trsvcid": "4420" 00:15:44.209 }, 00:15:44.209 "secure_channel": true 00:15:44.209 } 00:15:44.209 } 00:15:44.209 ] 00:15:44.209 } 00:15:44.209 ] 00:15:44.209 }' 00:15:44.209 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:44.469 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:44.469 "subsystems": [ 00:15:44.469 { 00:15:44.469 "subsystem": "keyring", 00:15:44.469 "config": [ 00:15:44.469 { 00:15:44.469 "method": "keyring_file_add_key", 00:15:44.469 "params": { 00:15:44.469 "name": "key0", 00:15:44.469 "path": "/tmp/tmp.ZeUWax8M3X" 00:15:44.469 } 00:15:44.469 } 00:15:44.469 ] 00:15:44.469 }, 00:15:44.469 { 00:15:44.469 "subsystem": "iobuf", 00:15:44.469 "config": [ 00:15:44.469 { 00:15:44.469 "method": "iobuf_set_options", 00:15:44.469 "params": { 00:15:44.469 "small_pool_count": 8192, 00:15:44.469 "large_pool_count": 1024, 00:15:44.469 "small_bufsize": 8192, 00:15:44.469 "large_bufsize": 135168, 00:15:44.469 "enable_numa": false 00:15:44.469 } 00:15:44.469 } 00:15:44.469 ] 00:15:44.469 }, 00:15:44.469 { 00:15:44.469 "subsystem": "sock", 00:15:44.469 "config": [ 00:15:44.469 { 00:15:44.469 "method": "sock_set_default_impl", 00:15:44.469 "params": { 00:15:44.469 "impl_name": "uring" 00:15:44.469 } 00:15:44.469 }, 00:15:44.469 { 00:15:44.469 "method": "sock_impl_set_options", 00:15:44.469 "params": { 00:15:44.469 "impl_name": "ssl", 00:15:44.469 "recv_buf_size": 4096, 00:15:44.470 "send_buf_size": 4096, 00:15:44.470 "enable_recv_pipe": true, 00:15:44.470 "enable_quickack": false, 00:15:44.470 "enable_placement_id": 0, 00:15:44.470 "enable_zerocopy_send_server": true, 00:15:44.470 "enable_zerocopy_send_client": false, 00:15:44.470 "zerocopy_threshold": 0, 00:15:44.470 "tls_version": 0, 00:15:44.470 "enable_ktls": false 00:15:44.470 } 00:15:44.470 }, 
00:15:44.470 { 00:15:44.470 "method": "sock_impl_set_options", 00:15:44.470 "params": { 00:15:44.470 "impl_name": "posix", 00:15:44.470 "recv_buf_size": 2097152, 00:15:44.470 "send_buf_size": 2097152, 00:15:44.470 "enable_recv_pipe": true, 00:15:44.470 "enable_quickack": false, 00:15:44.470 "enable_placement_id": 0, 00:15:44.470 "enable_zerocopy_send_server": true, 00:15:44.470 "enable_zerocopy_send_client": false, 00:15:44.470 "zerocopy_threshold": 0, 00:15:44.470 "tls_version": 0, 00:15:44.470 "enable_ktls": false 00:15:44.470 } 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "method": "sock_impl_set_options", 00:15:44.470 "params": { 00:15:44.470 "impl_name": "uring", 00:15:44.470 "recv_buf_size": 2097152, 00:15:44.470 "send_buf_size": 2097152, 00:15:44.470 "enable_recv_pipe": true, 00:15:44.470 "enable_quickack": false, 00:15:44.470 "enable_placement_id": 0, 00:15:44.470 "enable_zerocopy_send_server": false, 00:15:44.470 "enable_zerocopy_send_client": false, 00:15:44.470 "zerocopy_threshold": 0, 00:15:44.470 "tls_version": 0, 00:15:44.470 "enable_ktls": false 00:15:44.470 } 00:15:44.470 } 00:15:44.470 ] 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "subsystem": "vmd", 00:15:44.470 "config": [] 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "subsystem": "accel", 00:15:44.470 "config": [ 00:15:44.470 { 00:15:44.470 "method": "accel_set_options", 00:15:44.470 "params": { 00:15:44.470 "small_cache_size": 128, 00:15:44.470 "large_cache_size": 16, 00:15:44.470 "task_count": 2048, 00:15:44.470 "sequence_count": 2048, 00:15:44.470 "buf_count": 2048 00:15:44.470 } 00:15:44.470 } 00:15:44.470 ] 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "subsystem": "bdev", 00:15:44.470 "config": [ 00:15:44.470 { 00:15:44.470 "method": "bdev_set_options", 00:15:44.470 "params": { 00:15:44.470 "bdev_io_pool_size": 65535, 00:15:44.470 "bdev_io_cache_size": 256, 00:15:44.470 "bdev_auto_examine": true, 00:15:44.470 "iobuf_small_cache_size": 128, 00:15:44.470 "iobuf_large_cache_size": 16 00:15:44.470 } 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "method": "bdev_raid_set_options", 00:15:44.470 "params": { 00:15:44.470 "process_window_size_kb": 1024, 00:15:44.470 "process_max_bandwidth_mb_sec": 0 00:15:44.470 } 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "method": "bdev_iscsi_set_options", 00:15:44.470 "params": { 00:15:44.470 "timeout_sec": 30 00:15:44.470 } 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "method": "bdev_nvme_set_options", 00:15:44.470 "params": { 00:15:44.470 "action_on_timeout": "none", 00:15:44.470 "timeout_us": 0, 00:15:44.470 "timeout_admin_us": 0, 00:15:44.470 "keep_alive_timeout_ms": 10000, 00:15:44.470 "arbitration_burst": 0, 00:15:44.470 "low_priority_weight": 0, 00:15:44.470 "medium_priority_weight": 0, 00:15:44.470 "high_priority_weight": 0, 00:15:44.470 "nvme_adminq_poll_period_us": 10000, 00:15:44.470 "nvme_ioq_poll_period_us": 0, 00:15:44.470 "io_queue_requests": 512, 00:15:44.470 "delay_cmd_submit": true, 00:15:44.470 "transport_retry_count": 4, 00:15:44.470 "bdev_retry_count": 3, 00:15:44.470 "transport_ack_timeout": 0, 00:15:44.470 "ctrlr_loss_timeout_sec": 0, 00:15:44.470 "reconnect_delay_sec": 0, 00:15:44.470 "fast_io_fail_timeout_sec": 0, 00:15:44.470 "disable_auto_failback": false, 00:15:44.470 "generate_uuids": false, 00:15:44.470 "transport_tos": 0, 00:15:44.470 "nvme_error_stat": false, 00:15:44.470 "rdma_srq_size": 0, 00:15:44.470 "io_path_stat": false, 00:15:44.470 "allow_accel_sequence": false, 00:15:44.470 "rdma_max_cq_size": 0, 00:15:44.470 "rdma_cm_event_timeout_ms": 0, 00:15:44.470 
"dhchap_digests": [ 00:15:44.470 "sha256", 00:15:44.470 "sha384", 00:15:44.470 "sha512" 00:15:44.470 ], 00:15:44.470 "dhchap_dhgroups": [ 00:15:44.470 "null", 00:15:44.470 "ffdhe2048", 00:15:44.470 "ffdhe3072", 00:15:44.470 "ffdhe4096", 00:15:44.470 "ffdhe6144", 00:15:44.470 "ffdhe8192" 00:15:44.470 ] 00:15:44.470 } 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "method": "bdev_nvme_attach_controller", 00:15:44.470 "params": { 00:15:44.470 "name": "TLSTEST", 00:15:44.470 "trtype": "TCP", 00:15:44.470 "adrfam": "IPv4", 00:15:44.470 "traddr": "10.0.0.3", 00:15:44.470 "trsvcid": "4420", 00:15:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.470 "prchk_reftag": false, 00:15:44.470 "prchk_guard": false, 00:15:44.470 "ctrlr_loss_timeout_sec": 0, 00:15:44.470 "reconnect_delay_sec": 0, 00:15:44.470 "fast_io_fail_timeout_sec": 0, 00:15:44.470 "psk": "key0", 00:15:44.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.470 "hdgst": false, 00:15:44.470 "ddgst": false, 00:15:44.470 "multipath": "multipath" 00:15:44.470 } 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "method": "bdev_nvme_set_hotplug", 00:15:44.470 "params": { 00:15:44.470 "period_us": 100000, 00:15:44.470 "enable": false 00:15:44.470 } 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "method": "bdev_wait_for_examine" 00:15:44.470 } 00:15:44.470 ] 00:15:44.470 }, 00:15:44.470 { 00:15:44.470 "subsystem": "nbd", 00:15:44.470 "config": [] 00:15:44.470 } 00:15:44.470 ] 00:15:44.470 }' 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72064 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72064 ']' 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72064 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72064 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72064' 00:15:44.470 killing process with pid 72064 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72064 00:15:44.470 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72064 00:15:44.470 Received shutdown signal, test time was about 10.000000 seconds 00:15:44.470 00:15:44.470 Latency(us) 00:15:44.470 [2024-11-19T10:09:58.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.470 [2024-11-19T10:09:58.359Z] =================================================================================================================== 00:15:44.470 [2024-11-19T10:09:58.359Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72016 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72016 ']' 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72016 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72016 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:44.729 killing process with pid 72016 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72016' 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72016 00:15:44.729 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72016 00:15:44.988 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:44.988 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:44.988 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:44.988 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.988 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:44.988 "subsystems": [ 00:15:44.988 { 00:15:44.988 "subsystem": "keyring", 00:15:44.988 "config": [ 00:15:44.988 { 00:15:44.988 "method": "keyring_file_add_key", 00:15:44.988 "params": { 00:15:44.988 "name": "key0", 00:15:44.988 "path": "/tmp/tmp.ZeUWax8M3X" 00:15:44.988 } 00:15:44.988 } 00:15:44.988 ] 00:15:44.988 }, 00:15:44.988 { 00:15:44.988 "subsystem": "iobuf", 00:15:44.988 "config": [ 00:15:44.988 { 00:15:44.988 "method": "iobuf_set_options", 00:15:44.988 "params": { 00:15:44.988 "small_pool_count": 8192, 00:15:44.988 "large_pool_count": 1024, 00:15:44.988 "small_bufsize": 8192, 00:15:44.988 "large_bufsize": 135168, 00:15:44.988 "enable_numa": false 00:15:44.988 } 00:15:44.988 } 00:15:44.988 ] 00:15:44.988 }, 00:15:44.988 { 00:15:44.988 "subsystem": "sock", 00:15:44.988 "config": [ 00:15:44.988 { 00:15:44.988 "method": "sock_set_default_impl", 00:15:44.988 "params": { 00:15:44.988 "impl_name": "uring" 00:15:44.988 } 00:15:44.988 }, 00:15:44.988 { 00:15:44.988 "method": "sock_impl_set_options", 00:15:44.988 "params": { 00:15:44.988 "impl_name": "ssl", 00:15:44.988 "recv_buf_size": 4096, 00:15:44.988 "send_buf_size": 4096, 00:15:44.988 "enable_recv_pipe": true, 00:15:44.988 "enable_quickack": false, 00:15:44.988 "enable_placement_id": 0, 00:15:44.988 "enable_zerocopy_send_server": true, 00:15:44.988 "enable_zerocopy_send_client": false, 00:15:44.988 "zerocopy_threshold": 0, 00:15:44.988 "tls_version": 0, 00:15:44.988 "enable_ktls": false 00:15:44.988 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "sock_impl_set_options", 00:15:44.989 "params": { 00:15:44.989 "impl_name": "posix", 00:15:44.989 "recv_buf_size": 2097152, 00:15:44.989 "send_buf_size": 2097152, 00:15:44.989 "enable_recv_pipe": true, 00:15:44.989 "enable_quickack": false, 00:15:44.989 "enable_placement_id": 0, 00:15:44.989 "enable_zerocopy_send_server": true, 00:15:44.989 "enable_zerocopy_send_client": false, 00:15:44.989 "zerocopy_threshold": 0, 00:15:44.989 "tls_version": 0, 00:15:44.989 "enable_ktls": false 
00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "sock_impl_set_options", 00:15:44.989 "params": { 00:15:44.989 "impl_name": "uring", 00:15:44.989 "recv_buf_size": 2097152, 00:15:44.989 "send_buf_size": 2097152, 00:15:44.989 "enable_recv_pipe": true, 00:15:44.989 "enable_quickack": false, 00:15:44.989 "enable_placement_id": 0, 00:15:44.989 "enable_zerocopy_send_server": false, 00:15:44.989 "enable_zerocopy_send_client": false, 00:15:44.989 "zerocopy_threshold": 0, 00:15:44.989 "tls_version": 0, 00:15:44.989 "enable_ktls": false 00:15:44.989 } 00:15:44.989 } 00:15:44.989 ] 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "subsystem": "vmd", 00:15:44.989 "config": [] 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "subsystem": "accel", 00:15:44.989 "config": [ 00:15:44.989 { 00:15:44.989 "method": "accel_set_options", 00:15:44.989 "params": { 00:15:44.989 "small_cache_size": 128, 00:15:44.989 "large_cache_size": 16, 00:15:44.989 "task_count": 2048, 00:15:44.989 "sequence_count": 2048, 00:15:44.989 "buf_count": 2048 00:15:44.989 } 00:15:44.989 } 00:15:44.989 ] 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "subsystem": "bdev", 00:15:44.989 "config": [ 00:15:44.989 { 00:15:44.989 "method": "bdev_set_options", 00:15:44.989 "params": { 00:15:44.989 "bdev_io_pool_size": 65535, 00:15:44.989 "bdev_io_cache_size": 256, 00:15:44.989 "bdev_auto_examine": true, 00:15:44.989 "iobuf_small_cache_size": 128, 00:15:44.989 "iobuf_large_cache_size": 16 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "bdev_raid_set_options", 00:15:44.989 "params": { 00:15:44.989 "process_window_size_kb": 1024, 00:15:44.989 "process_max_bandwidth_mb_sec": 0 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "bdev_iscsi_set_options", 00:15:44.989 "params": { 00:15:44.989 "timeout_sec": 30 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "bdev_nvme_set_options", 00:15:44.989 "params": { 00:15:44.989 "action_on_timeout": "none", 00:15:44.989 "timeout_us": 0, 00:15:44.989 "timeout_admin_us": 0, 00:15:44.989 "keep_alive_timeout_ms": 10000, 00:15:44.989 "arbitration_burst": 0, 00:15:44.989 "low_priority_weight": 0, 00:15:44.989 "medium_priority_weight": 0, 00:15:44.989 "high_priority_weight": 0, 00:15:44.989 "nvme_adminq_poll_period_us": 10000, 00:15:44.989 "nvme_ioq_poll_period_us": 0, 00:15:44.989 "io_queue_requests": 0, 00:15:44.989 "delay_cmd_submit": true, 00:15:44.989 "transport_retry_count": 4, 00:15:44.989 "bdev_retry_count": 3, 00:15:44.989 "transport_ack_timeout": 0, 00:15:44.989 "ctrlr_loss_timeout_sec": 0, 00:15:44.989 "reconnect_delay_sec": 0, 00:15:44.989 "fast_io_fail_timeout_sec": 0, 00:15:44.989 "disable_auto_failback": false, 00:15:44.989 "generate_uuids": false, 00:15:44.989 "transport_tos": 0, 00:15:44.989 "nvme_error_stat": false, 00:15:44.989 "rdma_srq_size": 0, 00:15:44.989 "io_path_stat": false, 00:15:44.989 "allow_accel_sequence": false, 00:15:44.989 "rdma_max_cq_size": 0, 00:15:44.989 "rdma_cm_event_timeout_ms": 0, 00:15:44.989 "dhchap_digests": [ 00:15:44.989 "sha256", 00:15:44.989 "sha384", 00:15:44.989 "sha512" 00:15:44.989 ], 00:15:44.989 "dhchap_dhgroups": [ 00:15:44.989 "null", 00:15:44.989 "ffdhe2048", 00:15:44.989 "ffdhe3072", 00:15:44.989 "ffdhe4096", 00:15:44.989 "ffdhe6144", 00:15:44.989 "ffdhe8192" 00:15:44.989 ] 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "bdev_nvme_set_hotplug", 00:15:44.989 "params": { 00:15:44.989 "period_us": 100000, 00:15:44.989 "enable": false 00:15:44.989 } 00:15:44.989 }, 
00:15:44.989 { 00:15:44.989 "method": "bdev_malloc_create", 00:15:44.989 "params": { 00:15:44.989 "name": "malloc0", 00:15:44.989 "num_blocks": 8192, 00:15:44.989 "block_size": 4096, 00:15:44.989 "physical_block_size": 4096, 00:15:44.989 "uuid": "4fd95d33-a0e7-43a8-a436-a432351085b6", 00:15:44.989 "optimal_io_boundary": 0, 00:15:44.989 "md_size": 0, 00:15:44.989 "dif_type": 0, 00:15:44.989 "dif_is_head_of_md": false, 00:15:44.989 "dif_pi_format": 0 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "bdev_wait_for_examine" 00:15:44.989 } 00:15:44.989 ] 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "subsystem": "nbd", 00:15:44.989 "config": [] 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "subsystem": "scheduler", 00:15:44.989 "config": [ 00:15:44.989 { 00:15:44.989 "method": "framework_set_scheduler", 00:15:44.989 "params": { 00:15:44.989 "name": "static" 00:15:44.989 } 00:15:44.989 } 00:15:44.989 ] 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "subsystem": "nvmf", 00:15:44.989 "config": [ 00:15:44.989 { 00:15:44.989 "method": "nvmf_set_config", 00:15:44.989 "params": { 00:15:44.989 "discovery_filter": "match_any", 00:15:44.989 "admin_cmd_passthru": { 00:15:44.989 "identify_ctrlr": false 00:15:44.989 }, 00:15:44.989 "dhchap_digests": [ 00:15:44.989 "sha256", 00:15:44.989 "sha384", 00:15:44.989 "sha512" 00:15:44.989 ], 00:15:44.989 "dhchap_dhgroups": [ 00:15:44.989 "null", 00:15:44.989 "ffdhe2048", 00:15:44.989 "ffdhe3072", 00:15:44.989 "ffdhe4096", 00:15:44.989 "ffdhe6144", 00:15:44.989 "ffdhe8192" 00:15:44.989 ] 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "nvmf_set_max_subsystems", 00:15:44.989 "params": { 00:15:44.989 "max_subsystems": 1024 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "nvmf_set_crdt", 00:15:44.989 "params": { 00:15:44.989 "crdt1": 0, 00:15:44.989 "crdt2": 0, 00:15:44.989 "crdt3": 0 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "nvmf_create_transport", 00:15:44.989 "params": { 00:15:44.989 "trtype": "TCP", 00:15:44.989 "max_queue_depth": 128, 00:15:44.989 "max_io_qpairs_per_ctrlr": 127, 00:15:44.989 "in_capsule_data_size": 4096, 00:15:44.989 "max_io_size": 131072, 00:15:44.989 "io_unit_size": 131072, 00:15:44.989 "max_aq_depth": 128, 00:15:44.989 "num_shared_buffers": 511, 00:15:44.989 "buf_cache_size": 4294967295, 00:15:44.989 "dif_insert_or_strip": false, 00:15:44.989 "zcopy": false, 00:15:44.989 "c2h_success": false, 00:15:44.989 "sock_priority": 0, 00:15:44.989 "abort_timeout_sec": 1, 00:15:44.989 "ack_timeout": 0, 00:15:44.989 "data_wr_pool_size": 0 00:15:44.989 } 00:15:44.989 }, 00:15:44.989 { 00:15:44.989 "method": "nvmf_create_subsystem", 00:15:44.989 "params": { 00:15:44.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.989 "allow_any_host": false, 00:15:44.989 "serial_number": "SPDK00000000000001", 00:15:44.989 "model_number": "SPDK bdev Controller", 00:15:44.989 "max_namespaces": 10, 00:15:44.989 "min_cntlid": 1, 00:15:44.989 "max_cntlid": 65519, 00:15:44.989 "ana_reporting": false 00:15:44.989 } 00:15:44.989 }, 00:15:44.990 { 00:15:44.990 "method": "nvmf_subsystem_add_host", 00:15:44.990 "params": { 00:15:44.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.990 "host": "nqn.2016-06.io.spdk:host1", 00:15:44.990 "psk": "key0" 00:15:44.990 } 00:15:44.990 }, 00:15:44.990 { 00:15:44.990 "method": "nvmf_subsystem_add_ns", 00:15:44.990 "params": { 00:15:44.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.990 "namespace": { 00:15:44.990 "nsid": 1, 00:15:44.990 "bdev_name": "malloc0", 
00:15:44.990 "nguid": "4FD95D33A0E743A8A436A432351085B6", 00:15:44.990 "uuid": "4fd95d33-a0e7-43a8-a436-a432351085b6", 00:15:44.990 "no_auto_visible": false 00:15:44.990 } 00:15:44.990 } 00:15:44.990 }, 00:15:44.990 { 00:15:44.990 "method": "nvmf_subsystem_add_listener", 00:15:44.990 "params": { 00:15:44.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.990 "listen_address": { 00:15:44.990 "trtype": "TCP", 00:15:44.990 "adrfam": "IPv4", 00:15:44.990 "traddr": "10.0.0.3", 00:15:44.990 "trsvcid": "4420" 00:15:44.990 }, 00:15:44.990 "secure_channel": true 00:15:44.990 } 00:15:44.990 } 00:15:44.990 ] 00:15:44.990 } 00:15:44.990 ] 00:15:44.990 }' 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72107 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72107 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72107 ']' 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.990 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.990 [2024-11-19 10:09:58.692709] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:44.990 [2024-11-19 10:09:58.692829] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.990 [2024-11-19 10:09:58.842639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.248 [2024-11-19 10:09:58.900992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.248 [2024-11-19 10:09:58.901049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.248 [2024-11-19 10:09:58.901061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.248 [2024-11-19 10:09:58.901071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.248 [2024-11-19 10:09:58.901078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:45.248 [2024-11-19 10:09:58.901521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.248 [2024-11-19 10:09:59.068726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.504 [2024-11-19 10:09:59.147867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.504 [2024-11-19 10:09:59.179800] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:45.504 [2024-11-19 10:09:59.180035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72145 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72145 /var/tmp/bdevperf.sock 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72145 ']' 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:46.072 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:46.072 "subsystems": [ 00:15:46.072 { 00:15:46.072 "subsystem": "keyring", 00:15:46.072 "config": [ 00:15:46.072 { 00:15:46.072 "method": "keyring_file_add_key", 00:15:46.072 "params": { 00:15:46.072 "name": "key0", 00:15:46.072 "path": "/tmp/tmp.ZeUWax8M3X" 00:15:46.072 } 00:15:46.072 } 00:15:46.072 ] 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "subsystem": "iobuf", 00:15:46.072 "config": [ 00:15:46.072 { 00:15:46.072 "method": "iobuf_set_options", 00:15:46.072 "params": { 00:15:46.072 "small_pool_count": 8192, 00:15:46.072 "large_pool_count": 1024, 00:15:46.072 "small_bufsize": 8192, 00:15:46.072 "large_bufsize": 135168, 00:15:46.072 "enable_numa": false 00:15:46.072 } 00:15:46.072 } 00:15:46.072 ] 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "subsystem": "sock", 00:15:46.072 "config": [ 00:15:46.072 { 00:15:46.072 "method": "sock_set_default_impl", 00:15:46.072 "params": { 00:15:46.072 "impl_name": "uring" 00:15:46.072 } 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "method": "sock_impl_set_options", 00:15:46.072 "params": { 00:15:46.072 "impl_name": "ssl", 00:15:46.072 "recv_buf_size": 4096, 00:15:46.072 "send_buf_size": 4096, 00:15:46.072 "enable_recv_pipe": true, 00:15:46.072 "enable_quickack": false, 00:15:46.072 "enable_placement_id": 0, 00:15:46.072 "enable_zerocopy_send_server": true, 00:15:46.072 "enable_zerocopy_send_client": false, 00:15:46.072 "zerocopy_threshold": 0, 00:15:46.072 "tls_version": 0, 00:15:46.072 "enable_ktls": false 00:15:46.072 } 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "method": "sock_impl_set_options", 00:15:46.072 "params": { 00:15:46.072 "impl_name": "posix", 00:15:46.072 "recv_buf_size": 2097152, 00:15:46.072 "send_buf_size": 2097152, 00:15:46.072 "enable_recv_pipe": true, 00:15:46.072 "enable_quickack": false, 00:15:46.072 "enable_placement_id": 0, 00:15:46.072 "enable_zerocopy_send_server": true, 00:15:46.072 "enable_zerocopy_send_client": false, 00:15:46.072 "zerocopy_threshold": 0, 00:15:46.072 "tls_version": 0, 00:15:46.072 "enable_ktls": false 00:15:46.072 } 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "method": "sock_impl_set_options", 00:15:46.072 "params": { 00:15:46.072 "impl_name": "uring", 00:15:46.072 "recv_buf_size": 2097152, 00:15:46.072 "send_buf_size": 2097152, 00:15:46.072 "enable_recv_pipe": true, 00:15:46.072 "enable_quickack": false, 00:15:46.072 "enable_placement_id": 0, 00:15:46.072 "enable_zerocopy_send_server": false, 00:15:46.072 "enable_zerocopy_send_client": false, 00:15:46.072 "zerocopy_threshold": 0, 00:15:46.072 "tls_version": 0, 00:15:46.072 "enable_ktls": false 00:15:46.072 } 00:15:46.072 } 00:15:46.072 ] 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "subsystem": "vmd", 00:15:46.072 "config": [] 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "subsystem": "accel", 00:15:46.072 "config": [ 00:15:46.072 { 00:15:46.072 "method": "accel_set_options", 00:15:46.072 "params": { 00:15:46.072 "small_cache_size": 128, 00:15:46.072 "large_cache_size": 16, 00:15:46.072 "task_count": 2048, 00:15:46.072 "sequence_count": 
2048, 00:15:46.072 "buf_count": 2048 00:15:46.072 } 00:15:46.072 } 00:15:46.072 ] 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "subsystem": "bdev", 00:15:46.072 "config": [ 00:15:46.072 { 00:15:46.072 "method": "bdev_set_options", 00:15:46.072 "params": { 00:15:46.072 "bdev_io_pool_size": 65535, 00:15:46.072 "bdev_io_cache_size": 256, 00:15:46.072 "bdev_auto_examine": true, 00:15:46.072 "iobuf_small_cache_size": 128, 00:15:46.072 "iobuf_large_cache_size": 16 00:15:46.072 } 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "method": "bdev_raid_set_options", 00:15:46.072 "params": { 00:15:46.072 "process_window_size_kb": 1024, 00:15:46.072 "process_max_bandwidth_mb_sec": 0 00:15:46.072 } 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "method": "bdev_iscsi_set_options", 00:15:46.072 "params": { 00:15:46.072 "timeout_sec": 30 00:15:46.072 } 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "method": "bdev_nvme_set_options", 00:15:46.072 "params": { 00:15:46.073 "action_on_timeout": "none", 00:15:46.073 "timeout_us": 0, 00:15:46.073 "timeout_admin_us": 0, 00:15:46.073 "keep_alive_timeout_ms": 10000, 00:15:46.073 "arbitration_burst": 0, 00:15:46.073 "low_priority_weight": 0, 00:15:46.073 "medium_priority_weight": 0, 00:15:46.073 "high_priority_weight": 0, 00:15:46.073 "nvme_adminq_poll_period_us": 10000, 00:15:46.073 "nvme_ioq_poll_period_us": 0, 00:15:46.073 "io_queue_requests": 512, 00:15:46.073 "delay_cmd_submit": true, 00:15:46.073 "transport_retry_count": 4, 00:15:46.073 "bdev_retry_count": 3, 00:15:46.073 "transport_ack_timeout": 0, 00:15:46.073 "ctrlr_loss_timeout_sec": 0, 00:15:46.073 "reconnect_delay_sec": 0, 00:15:46.073 "fast_io_fail_timeout_sec": 0, 00:15:46.073 "disable_auto_failback": false, 00:15:46.073 "generate_uuids": false, 00:15:46.073 "transport_tos": 0, 00:15:46.073 "nvme_error_stat": false, 00:15:46.073 "rdma_srq_size": 0, 00:15:46.073 "io_path_stat": false, 00:15:46.073 "allow_accel_sequence": false, 00:15:46.073 "rdma_max_cq_size": 0, 00:15:46.073 "rdma_cm_event_timeout_ms": 0, 00:15:46.073 "dhchap_digests": [ 00:15:46.073 "sha256", 00:15:46.073 "sha384", 00:15:46.073 "sha512" 00:15:46.073 ], 00:15:46.073 "dhchap_dhgroups": [ 00:15:46.073 "null", 00:15:46.073 "ffdhe2048", 00:15:46.073 "ffdhe3072", 00:15:46.073 "ffdhe4096", 00:15:46.073 "ffdhe6144", 00:15:46.073 "ffdhe8192" 00:15:46.073 ] 00:15:46.073 } 00:15:46.073 }, 00:15:46.073 { 00:15:46.073 "method": "bdev_nvme_attach_controller", 00:15:46.073 "params": { 00:15:46.073 "name": "TLSTEST", 00:15:46.073 "trtype": "TCP", 00:15:46.073 "adrfam": "IPv4", 00:15:46.073 "traddr": "10.0.0.3", 00:15:46.073 "trsvcid": "4420", 00:15:46.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.073 "prchk_reftag": false, 00:15:46.073 "prchk_guard": false, 00:15:46.073 "ctrlr_loss_timeout_sec": 0, 00:15:46.073 "reconnect_delay_sec": 0, 00:15:46.073 "fast_io_fail_timeout_sec": 0, 00:15:46.073 "psk": "key0", 00:15:46.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.073 "hdgst": false, 00:15:46.073 "ddgst": false, 00:15:46.073 "multipath": "multipath" 00:15:46.073 } 00:15:46.073 }, 00:15:46.073 { 00:15:46.073 "method": "bdev_nvme_set_hotplug", 00:15:46.073 "params": { 00:15:46.073 "period_us": 100000, 00:15:46.073 "enable": false 00:15:46.073 } 00:15:46.073 }, 00:15:46.073 { 00:15:46.073 "method": "bdev_wait_for_examine" 00:15:46.073 } 00:15:46.073 ] 00:15:46.073 }, 00:15:46.073 { 00:15:46.073 "subsystem": "nbd", 00:15:46.073 "config": [] 00:15:46.073 } 00:15:46.073 ] 00:15:46.073 }' 00:15:46.073 [2024-11-19 10:09:59.898327] Starting SPDK v25.01-pre git 
sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:46.073 [2024-11-19 10:09:59.898435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72145 ] 00:15:46.333 [2024-11-19 10:10:00.047782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.333 [2024-11-19 10:10:00.112781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.592 [2024-11-19 10:10:00.249251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:46.592 [2024-11-19 10:10:00.298481] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:47.159 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.160 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:47.160 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:47.418 Running I/O for 10 seconds... 00:15:49.286 4073.00 IOPS, 15.91 MiB/s [2024-11-19T10:10:04.551Z] 4113.00 IOPS, 16.07 MiB/s [2024-11-19T10:10:05.486Z] 4124.67 IOPS, 16.11 MiB/s [2024-11-19T10:10:06.420Z] 4122.00 IOPS, 16.10 MiB/s [2024-11-19T10:10:07.354Z] 4123.00 IOPS, 16.11 MiB/s [2024-11-19T10:10:08.290Z] 4126.67 IOPS, 16.12 MiB/s [2024-11-19T10:10:09.226Z] 4131.00 IOPS, 16.14 MiB/s [2024-11-19T10:10:10.162Z] 4135.00 IOPS, 16.15 MiB/s [2024-11-19T10:10:11.536Z] 4134.11 IOPS, 16.15 MiB/s [2024-11-19T10:10:11.536Z] 4134.50 IOPS, 16.15 MiB/s 00:15:57.647 Latency(us) 00:15:57.647 [2024-11-19T10:10:11.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.647 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:57.647 Verification LBA range: start 0x0 length 0x2000 00:15:57.647 TLSTESTn1 : 10.02 4140.72 16.17 0.00 0.00 30857.46 5004.57 24665.37 00:15:57.647 [2024-11-19T10:10:11.536Z] =================================================================================================================== 00:15:57.647 [2024-11-19T10:10:11.536Z] Total : 4140.72 16.17 0.00 0.00 30857.46 5004.57 24665.37 00:15:57.647 { 00:15:57.647 "results": [ 00:15:57.647 { 00:15:57.647 "job": "TLSTESTn1", 00:15:57.647 "core_mask": "0x4", 00:15:57.647 "workload": "verify", 00:15:57.647 "status": "finished", 00:15:57.647 "verify_range": { 00:15:57.647 "start": 0, 00:15:57.647 "length": 8192 00:15:57.647 }, 00:15:57.647 "queue_depth": 128, 00:15:57.647 "io_size": 4096, 00:15:57.647 "runtime": 10.01565, 00:15:57.647 "iops": 4140.719773554388, 00:15:57.647 "mibps": 16.174686615446827, 00:15:57.647 "io_failed": 0, 00:15:57.647 "io_timeout": 0, 00:15:57.647 "avg_latency_us": 30857.457418630755, 00:15:57.647 "min_latency_us": 5004.567272727273, 00:15:57.647 "max_latency_us": 24665.36727272727 00:15:57.647 } 00:15:57.647 ], 00:15:57.647 "core_count": 1 00:15:57.647 } 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72145 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72145 ']' 00:15:57.647 
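(Quick sanity check on the result block above: at a 4096-byte I/O size, 4140.72 IOPS works out to 4140.72 × 4096 ≈ 16,960,000 bytes/s, which is ≈ 16.17 MiB/s after dividing by 1048576, matching the reported MiB/s column.)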
10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72145 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72145 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:57.647 killing process with pid 72145 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72145' 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72145 00:15:57.647 Received shutdown signal, test time was about 10.000000 seconds 00:15:57.647 00:15:57.647 Latency(us) 00:15:57.647 [2024-11-19T10:10:11.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.647 [2024-11-19T10:10:11.536Z] =================================================================================================================== 00:15:57.647 [2024-11-19T10:10:11.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72145 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72107 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72107 ']' 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72107 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72107 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:57.647 killing process with pid 72107 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72107' 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72107 00:15:57.647 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72107 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72283 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72283 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72283 ']' 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.906 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.906 [2024-11-19 10:10:11.700116] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:57.906 [2024-11-19 10:10:11.700226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.165 [2024-11-19 10:10:11.850670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.165 [2024-11-19 10:10:11.911801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.165 [2024-11-19 10:10:11.911860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.165 [2024-11-19 10:10:11.911872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.165 [2024-11-19 10:10:11.911881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.165 [2024-11-19 10:10:11.911889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:58.165 [2024-11-19 10:10:11.912337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.165 [2024-11-19 10:10:11.966289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.165 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.165 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:58.165 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.165 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.165 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.424 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.424 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ZeUWax8M3X 00:15:58.424 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZeUWax8M3X 00:15:58.424 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:58.682 [2024-11-19 10:10:12.356701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.682 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:58.941 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:59.200 [2024-11-19 10:10:12.884826] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:59.200 [2024-11-19 10:10:12.885087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.200 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:59.458 malloc0 00:15:59.458 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:59.716 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:15:59.992 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:00.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72330 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72330 /var/tmp/bdevperf.sock 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72330 ']' 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.251 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.251 [2024-11-19 10:10:13.974220] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:00.251 [2024-11-19 10:10:13.974323] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72330 ] 00:16:00.251 [2024-11-19 10:10:14.123600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.511 [2024-11-19 10:10:14.192462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.511 [2024-11-19 10:10:14.250370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.511 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.511 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:00.511 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:16:00.769 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:01.338 [2024-11-19 10:10:14.939066] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:01.338 nvme0n1 00:16:01.338 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:01.338 Running I/O for 1 seconds... 
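(Host side, the trace above reduces to: start bdevperf idle (-z), register the same PSK on its private RPC socket, attach a TLS-protected controller, and kick off the run whose output follows. A consolidated sketch using the paths from the log:)

    bin=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # -z: start idle and wait for RPC configuration; -m 2 pins the reactor to core 1.
    $bin -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

    # The initiator needs the PSK file registered under the same key name as the target.
    $rpc keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # Run the configured verify workload (1 second) against the attached namespace.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests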
00:16:02.710 3876.00 IOPS, 15.14 MiB/s 00:16:02.710 Latency(us) 00:16:02.710 [2024-11-19T10:10:16.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.710 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:02.710 Verification LBA range: start 0x0 length 0x2000 00:16:02.710 nvme0n1 : 1.02 3917.35 15.30 0.00 0.00 32220.61 4021.53 19899.11 00:16:02.710 [2024-11-19T10:10:16.599Z] =================================================================================================================== 00:16:02.710 [2024-11-19T10:10:16.599Z] Total : 3917.35 15.30 0.00 0.00 32220.61 4021.53 19899.11 00:16:02.710 { 00:16:02.710 "results": [ 00:16:02.710 { 00:16:02.710 "job": "nvme0n1", 00:16:02.710 "core_mask": "0x2", 00:16:02.710 "workload": "verify", 00:16:02.710 "status": "finished", 00:16:02.710 "verify_range": { 00:16:02.710 "start": 0, 00:16:02.710 "length": 8192 00:16:02.710 }, 00:16:02.710 "queue_depth": 128, 00:16:02.710 "io_size": 4096, 00:16:02.710 "runtime": 1.02212, 00:16:02.710 "iops": 3917.348256564787, 00:16:02.710 "mibps": 15.302141627206199, 00:16:02.710 "io_failed": 0, 00:16:02.710 "io_timeout": 0, 00:16:02.710 "avg_latency_us": 32220.610574879665, 00:16:02.710 "min_latency_us": 4021.5272727272727, 00:16:02.710 "max_latency_us": 19899.112727272728 00:16:02.710 } 00:16:02.710 ], 00:16:02.710 "core_count": 1 00:16:02.710 } 00:16:02.710 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72330 00:16:02.710 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72330 ']' 00:16:02.710 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72330 00:16:02.710 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.710 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.710 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72330 00:16:02.710 killing process with pid 72330 00:16:02.710 Received shutdown signal, test time was about 1.000000 seconds 00:16:02.710 00:16:02.710 Latency(us) 00:16:02.711 [2024-11-19T10:10:16.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.711 [2024-11-19T10:10:16.600Z] =================================================================================================================== 00:16:02.711 [2024-11-19T10:10:16.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72330' 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72330 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72330 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72283 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72283 ']' 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72283 00:16:02.711 10:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72283 00:16:02.711 killing process with pid 72283 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72283' 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72283 00:16:02.711 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72283 00:16:02.968 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:02.968 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.968 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.968 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.968 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72375 00:16:02.969 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:02.969 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72375 00:16:02.969 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72375 ']' 00:16:02.969 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.969 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.969 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.969 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.969 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.969 [2024-11-19 10:10:16.765258] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:02.969 [2024-11-19 10:10:16.765375] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.227 [2024-11-19 10:10:16.915447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.227 [2024-11-19 10:10:16.976985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.227 [2024-11-19 10:10:16.977048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:03.227 [2024-11-19 10:10:16.977060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.227 [2024-11-19 10:10:16.977069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.227 [2024-11-19 10:10:16.977076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.227 [2024-11-19 10:10:16.977472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.227 [2024-11-19 10:10:17.033369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.227 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.227 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:03.227 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.227 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.227 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.485 [2024-11-19 10:10:17.143128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.485 malloc0 00:16:03.485 [2024-11-19 10:10:17.175065] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:03.485 [2024-11-19 10:10:17.175282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72400 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72400 /var/tmp/bdevperf.sock 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72400 ']' 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
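waitforlisten (autotest_common.sh@839-844 above, with max_retries=100) simply blocks until the -z'd bdevperf answers on /var/tmp/bdevperf.sock. A hypothetical stand-in for that wait, polling the socket with a cheap RPC; the loop body is an illustration, not the helper's actual code:

  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>/dev/null && break
      sleep 0.1
  done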
00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.485 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.485 [2024-11-19 10:10:17.280566] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:03.486 [2024-11-19 10:10:17.280663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72400 ] 00:16:03.742 [2024-11-19 10:10:17.436468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.742 [2024-11-19 10:10:17.502492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.742 [2024-11-19 10:10:17.560639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.742 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.742 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:03.742 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZeUWax8M3X 00:16:04.307 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:04.307 [2024-11-19 10:10:18.194435] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:04.566 nvme0n1 00:16:04.566 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:04.566 Running I/O for 1 seconds... 
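In the result table that follows, the MiB/s column is just IOPS times the 4 KiB IO size set with -o 4k; a quick check of the figure reported below:

  echo '3883.04 * 4096 / 1048576' | bc -l                # ~15.17 MiB/s, matching the nvme0n1 row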
00:16:05.940 3840.00 IOPS, 15.00 MiB/s 00:16:05.940 Latency(us) 00:16:05.940 [2024-11-19T10:10:19.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.940 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.940 Verification LBA range: start 0x0 length 0x2000 00:16:05.940 nvme0n1 : 1.02 3883.04 15.17 0.00 0.00 32600.75 11081.54 24069.59 00:16:05.940 [2024-11-19T10:10:19.829Z] =================================================================================================================== 00:16:05.940 [2024-11-19T10:10:19.830Z] Total : 3883.04 15.17 0.00 0.00 32600.75 11081.54 24069.59 00:16:05.941 { 00:16:05.941 "results": [ 00:16:05.941 { 00:16:05.941 "job": "nvme0n1", 00:16:05.941 "core_mask": "0x2", 00:16:05.941 "workload": "verify", 00:16:05.941 "status": "finished", 00:16:05.941 "verify_range": { 00:16:05.941 "start": 0, 00:16:05.941 "length": 8192 00:16:05.941 }, 00:16:05.941 "queue_depth": 128, 00:16:05.941 "io_size": 4096, 00:16:05.941 "runtime": 1.021879, 00:16:05.941 "iops": 3883.0429042968885, 00:16:05.941 "mibps": 15.16813634490972, 00:16:05.941 "io_failed": 0, 00:16:05.941 "io_timeout": 0, 00:16:05.941 "avg_latency_us": 32600.751671554248, 00:16:05.941 "min_latency_us": 11081.541818181819, 00:16:05.941 "max_latency_us": 24069.585454545453 00:16:05.941 } 00:16:05.941 ], 00:16:05.941 "core_count": 1 00:16:05.941 } 00:16:05.941 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:05.941 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.941 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.941 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.941 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:16:05.941 "subsystems": [ 00:16:05.941 { 00:16:05.941 "subsystem": "keyring", 00:16:05.941 "config": [ 00:16:05.941 { 00:16:05.941 "method": "keyring_file_add_key", 00:16:05.941 "params": { 00:16:05.941 "name": "key0", 00:16:05.941 "path": "/tmp/tmp.ZeUWax8M3X" 00:16:05.941 } 00:16:05.941 } 00:16:05.941 ] 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "subsystem": "iobuf", 00:16:05.941 "config": [ 00:16:05.941 { 00:16:05.941 "method": "iobuf_set_options", 00:16:05.941 "params": { 00:16:05.941 "small_pool_count": 8192, 00:16:05.941 "large_pool_count": 1024, 00:16:05.941 "small_bufsize": 8192, 00:16:05.941 "large_bufsize": 135168, 00:16:05.941 "enable_numa": false 00:16:05.941 } 00:16:05.941 } 00:16:05.941 ] 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "subsystem": "sock", 00:16:05.941 "config": [ 00:16:05.941 { 00:16:05.941 "method": "sock_set_default_impl", 00:16:05.941 "params": { 00:16:05.941 "impl_name": "uring" 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "sock_impl_set_options", 00:16:05.941 "params": { 00:16:05.941 "impl_name": "ssl", 00:16:05.941 "recv_buf_size": 4096, 00:16:05.941 "send_buf_size": 4096, 00:16:05.941 "enable_recv_pipe": true, 00:16:05.941 "enable_quickack": false, 00:16:05.941 "enable_placement_id": 0, 00:16:05.941 "enable_zerocopy_send_server": true, 00:16:05.941 "enable_zerocopy_send_client": false, 00:16:05.941 "zerocopy_threshold": 0, 00:16:05.941 "tls_version": 0, 00:16:05.941 "enable_ktls": false 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "sock_impl_set_options", 00:16:05.941 "params": { 00:16:05.941 "impl_name": 
"posix", 00:16:05.941 "recv_buf_size": 2097152, 00:16:05.941 "send_buf_size": 2097152, 00:16:05.941 "enable_recv_pipe": true, 00:16:05.941 "enable_quickack": false, 00:16:05.941 "enable_placement_id": 0, 00:16:05.941 "enable_zerocopy_send_server": true, 00:16:05.941 "enable_zerocopy_send_client": false, 00:16:05.941 "zerocopy_threshold": 0, 00:16:05.941 "tls_version": 0, 00:16:05.941 "enable_ktls": false 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "sock_impl_set_options", 00:16:05.941 "params": { 00:16:05.941 "impl_name": "uring", 00:16:05.941 "recv_buf_size": 2097152, 00:16:05.941 "send_buf_size": 2097152, 00:16:05.941 "enable_recv_pipe": true, 00:16:05.941 "enable_quickack": false, 00:16:05.941 "enable_placement_id": 0, 00:16:05.941 "enable_zerocopy_send_server": false, 00:16:05.941 "enable_zerocopy_send_client": false, 00:16:05.941 "zerocopy_threshold": 0, 00:16:05.941 "tls_version": 0, 00:16:05.941 "enable_ktls": false 00:16:05.941 } 00:16:05.941 } 00:16:05.941 ] 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "subsystem": "vmd", 00:16:05.941 "config": [] 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "subsystem": "accel", 00:16:05.941 "config": [ 00:16:05.941 { 00:16:05.941 "method": "accel_set_options", 00:16:05.941 "params": { 00:16:05.941 "small_cache_size": 128, 00:16:05.941 "large_cache_size": 16, 00:16:05.941 "task_count": 2048, 00:16:05.941 "sequence_count": 2048, 00:16:05.941 "buf_count": 2048 00:16:05.941 } 00:16:05.941 } 00:16:05.941 ] 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "subsystem": "bdev", 00:16:05.941 "config": [ 00:16:05.941 { 00:16:05.941 "method": "bdev_set_options", 00:16:05.941 "params": { 00:16:05.941 "bdev_io_pool_size": 65535, 00:16:05.941 "bdev_io_cache_size": 256, 00:16:05.941 "bdev_auto_examine": true, 00:16:05.941 "iobuf_small_cache_size": 128, 00:16:05.941 "iobuf_large_cache_size": 16 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "bdev_raid_set_options", 00:16:05.941 "params": { 00:16:05.941 "process_window_size_kb": 1024, 00:16:05.941 "process_max_bandwidth_mb_sec": 0 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "bdev_iscsi_set_options", 00:16:05.941 "params": { 00:16:05.941 "timeout_sec": 30 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "bdev_nvme_set_options", 00:16:05.941 "params": { 00:16:05.941 "action_on_timeout": "none", 00:16:05.941 "timeout_us": 0, 00:16:05.941 "timeout_admin_us": 0, 00:16:05.941 "keep_alive_timeout_ms": 10000, 00:16:05.941 "arbitration_burst": 0, 00:16:05.941 "low_priority_weight": 0, 00:16:05.941 "medium_priority_weight": 0, 00:16:05.941 "high_priority_weight": 0, 00:16:05.941 "nvme_adminq_poll_period_us": 10000, 00:16:05.941 "nvme_ioq_poll_period_us": 0, 00:16:05.941 "io_queue_requests": 0, 00:16:05.941 "delay_cmd_submit": true, 00:16:05.941 "transport_retry_count": 4, 00:16:05.941 "bdev_retry_count": 3, 00:16:05.941 "transport_ack_timeout": 0, 00:16:05.941 "ctrlr_loss_timeout_sec": 0, 00:16:05.941 "reconnect_delay_sec": 0, 00:16:05.941 "fast_io_fail_timeout_sec": 0, 00:16:05.941 "disable_auto_failback": false, 00:16:05.941 "generate_uuids": false, 00:16:05.941 "transport_tos": 0, 00:16:05.941 "nvme_error_stat": false, 00:16:05.941 "rdma_srq_size": 0, 00:16:05.941 "io_path_stat": false, 00:16:05.941 "allow_accel_sequence": false, 00:16:05.941 "rdma_max_cq_size": 0, 00:16:05.941 "rdma_cm_event_timeout_ms": 0, 00:16:05.941 "dhchap_digests": [ 00:16:05.941 "sha256", 00:16:05.941 "sha384", 00:16:05.941 "sha512" 00:16:05.941 ], 00:16:05.941 
"dhchap_dhgroups": [ 00:16:05.941 "null", 00:16:05.941 "ffdhe2048", 00:16:05.941 "ffdhe3072", 00:16:05.941 "ffdhe4096", 00:16:05.941 "ffdhe6144", 00:16:05.941 "ffdhe8192" 00:16:05.941 ] 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "bdev_nvme_set_hotplug", 00:16:05.941 "params": { 00:16:05.941 "period_us": 100000, 00:16:05.941 "enable": false 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "bdev_malloc_create", 00:16:05.941 "params": { 00:16:05.941 "name": "malloc0", 00:16:05.941 "num_blocks": 8192, 00:16:05.941 "block_size": 4096, 00:16:05.941 "physical_block_size": 4096, 00:16:05.941 "uuid": "c2bd1cb4-b62c-425b-b6b7-beb7e75a17be", 00:16:05.941 "optimal_io_boundary": 0, 00:16:05.941 "md_size": 0, 00:16:05.941 "dif_type": 0, 00:16:05.941 "dif_is_head_of_md": false, 00:16:05.941 "dif_pi_format": 0 00:16:05.941 } 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "method": "bdev_wait_for_examine" 00:16:05.941 } 00:16:05.941 ] 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "subsystem": "nbd", 00:16:05.941 "config": [] 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "subsystem": "scheduler", 00:16:05.941 "config": [ 00:16:05.941 { 00:16:05.941 "method": "framework_set_scheduler", 00:16:05.941 "params": { 00:16:05.941 "name": "static" 00:16:05.941 } 00:16:05.941 } 00:16:05.941 ] 00:16:05.941 }, 00:16:05.941 { 00:16:05.941 "subsystem": "nvmf", 00:16:05.941 "config": [ 00:16:05.941 { 00:16:05.941 "method": "nvmf_set_config", 00:16:05.941 "params": { 00:16:05.941 "discovery_filter": "match_any", 00:16:05.941 "admin_cmd_passthru": { 00:16:05.941 "identify_ctrlr": false 00:16:05.941 }, 00:16:05.941 "dhchap_digests": [ 00:16:05.941 "sha256", 00:16:05.941 "sha384", 00:16:05.941 "sha512" 00:16:05.941 ], 00:16:05.941 "dhchap_dhgroups": [ 00:16:05.941 "null", 00:16:05.941 "ffdhe2048", 00:16:05.941 "ffdhe3072", 00:16:05.941 "ffdhe4096", 00:16:05.941 "ffdhe6144", 00:16:05.941 "ffdhe8192" 00:16:05.941 ] 00:16:05.942 } 00:16:05.942 }, 00:16:05.942 { 00:16:05.942 "method": "nvmf_set_max_subsystems", 00:16:05.942 "params": { 00:16:05.942 "max_subsystems": 1024 00:16:05.942 } 00:16:05.942 }, 00:16:05.942 { 00:16:05.942 "method": "nvmf_set_crdt", 00:16:05.942 "params": { 00:16:05.942 "crdt1": 0, 00:16:05.942 "crdt2": 0, 00:16:05.942 "crdt3": 0 00:16:05.942 } 00:16:05.942 }, 00:16:05.942 { 00:16:05.942 "method": "nvmf_create_transport", 00:16:05.942 "params": { 00:16:05.942 "trtype": "TCP", 00:16:05.942 "max_queue_depth": 128, 00:16:05.942 "max_io_qpairs_per_ctrlr": 127, 00:16:05.942 "in_capsule_data_size": 4096, 00:16:05.942 "max_io_size": 131072, 00:16:05.942 "io_unit_size": 131072, 00:16:05.942 "max_aq_depth": 128, 00:16:05.942 "num_shared_buffers": 511, 00:16:05.942 "buf_cache_size": 4294967295, 00:16:05.942 "dif_insert_or_strip": false, 00:16:05.942 "zcopy": false, 00:16:05.942 "c2h_success": false, 00:16:05.942 "sock_priority": 0, 00:16:05.942 "abort_timeout_sec": 1, 00:16:05.942 "ack_timeout": 0, 00:16:05.942 "data_wr_pool_size": 0 00:16:05.942 } 00:16:05.942 }, 00:16:05.942 { 00:16:05.942 "method": "nvmf_create_subsystem", 00:16:05.942 "params": { 00:16:05.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.942 "allow_any_host": false, 00:16:05.942 "serial_number": "00000000000000000000", 00:16:05.942 "model_number": "SPDK bdev Controller", 00:16:05.942 "max_namespaces": 32, 00:16:05.942 "min_cntlid": 1, 00:16:05.942 "max_cntlid": 65519, 00:16:05.942 "ana_reporting": false 00:16:05.942 } 00:16:05.942 }, 00:16:05.942 { 00:16:05.942 "method": "nvmf_subsystem_add_host", 
00:16:05.942 "params": { 00:16:05.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.942 "host": "nqn.2016-06.io.spdk:host1", 00:16:05.942 "psk": "key0" 00:16:05.942 } 00:16:05.942 }, 00:16:05.942 { 00:16:05.942 "method": "nvmf_subsystem_add_ns", 00:16:05.942 "params": { 00:16:05.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.942 "namespace": { 00:16:05.942 "nsid": 1, 00:16:05.942 "bdev_name": "malloc0", 00:16:05.942 "nguid": "C2BD1CB4B62C425BB6B7BEB7E75A17BE", 00:16:05.942 "uuid": "c2bd1cb4-b62c-425b-b6b7-beb7e75a17be", 00:16:05.942 "no_auto_visible": false 00:16:05.942 } 00:16:05.942 } 00:16:05.942 }, 00:16:05.942 { 00:16:05.942 "method": "nvmf_subsystem_add_listener", 00:16:05.942 "params": { 00:16:05.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.942 "listen_address": { 00:16:05.942 "trtype": "TCP", 00:16:05.942 "adrfam": "IPv4", 00:16:05.942 "traddr": "10.0.0.3", 00:16:05.942 "trsvcid": "4420" 00:16:05.942 }, 00:16:05.942 "secure_channel": false, 00:16:05.942 "sock_impl": "ssl" 00:16:05.942 } 00:16:05.942 } 00:16:05.942 ] 00:16:05.942 } 00:16:05.942 ] 00:16:05.942 }' 00:16:05.942 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:06.200 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:06.200 "subsystems": [ 00:16:06.200 { 00:16:06.200 "subsystem": "keyring", 00:16:06.200 "config": [ 00:16:06.200 { 00:16:06.200 "method": "keyring_file_add_key", 00:16:06.200 "params": { 00:16:06.200 "name": "key0", 00:16:06.200 "path": "/tmp/tmp.ZeUWax8M3X" 00:16:06.200 } 00:16:06.200 } 00:16:06.200 ] 00:16:06.200 }, 00:16:06.200 { 00:16:06.200 "subsystem": "iobuf", 00:16:06.200 "config": [ 00:16:06.201 { 00:16:06.201 "method": "iobuf_set_options", 00:16:06.201 "params": { 00:16:06.201 "small_pool_count": 8192, 00:16:06.201 "large_pool_count": 1024, 00:16:06.201 "small_bufsize": 8192, 00:16:06.201 "large_bufsize": 135168, 00:16:06.201 "enable_numa": false 00:16:06.201 } 00:16:06.201 } 00:16:06.201 ] 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "subsystem": "sock", 00:16:06.201 "config": [ 00:16:06.201 { 00:16:06.201 "method": "sock_set_default_impl", 00:16:06.201 "params": { 00:16:06.201 "impl_name": "uring" 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "sock_impl_set_options", 00:16:06.201 "params": { 00:16:06.201 "impl_name": "ssl", 00:16:06.201 "recv_buf_size": 4096, 00:16:06.201 "send_buf_size": 4096, 00:16:06.201 "enable_recv_pipe": true, 00:16:06.201 "enable_quickack": false, 00:16:06.201 "enable_placement_id": 0, 00:16:06.201 "enable_zerocopy_send_server": true, 00:16:06.201 "enable_zerocopy_send_client": false, 00:16:06.201 "zerocopy_threshold": 0, 00:16:06.201 "tls_version": 0, 00:16:06.201 "enable_ktls": false 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "sock_impl_set_options", 00:16:06.201 "params": { 00:16:06.201 "impl_name": "posix", 00:16:06.201 "recv_buf_size": 2097152, 00:16:06.201 "send_buf_size": 2097152, 00:16:06.201 "enable_recv_pipe": true, 00:16:06.201 "enable_quickack": false, 00:16:06.201 "enable_placement_id": 0, 00:16:06.201 "enable_zerocopy_send_server": true, 00:16:06.201 "enable_zerocopy_send_client": false, 00:16:06.201 "zerocopy_threshold": 0, 00:16:06.201 "tls_version": 0, 00:16:06.201 "enable_ktls": false 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "sock_impl_set_options", 00:16:06.201 "params": { 00:16:06.201 "impl_name": "uring", 00:16:06.201 
"recv_buf_size": 2097152, 00:16:06.201 "send_buf_size": 2097152, 00:16:06.201 "enable_recv_pipe": true, 00:16:06.201 "enable_quickack": false, 00:16:06.201 "enable_placement_id": 0, 00:16:06.201 "enable_zerocopy_send_server": false, 00:16:06.201 "enable_zerocopy_send_client": false, 00:16:06.201 "zerocopy_threshold": 0, 00:16:06.201 "tls_version": 0, 00:16:06.201 "enable_ktls": false 00:16:06.201 } 00:16:06.201 } 00:16:06.201 ] 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "subsystem": "vmd", 00:16:06.201 "config": [] 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "subsystem": "accel", 00:16:06.201 "config": [ 00:16:06.201 { 00:16:06.201 "method": "accel_set_options", 00:16:06.201 "params": { 00:16:06.201 "small_cache_size": 128, 00:16:06.201 "large_cache_size": 16, 00:16:06.201 "task_count": 2048, 00:16:06.201 "sequence_count": 2048, 00:16:06.201 "buf_count": 2048 00:16:06.201 } 00:16:06.201 } 00:16:06.201 ] 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "subsystem": "bdev", 00:16:06.201 "config": [ 00:16:06.201 { 00:16:06.201 "method": "bdev_set_options", 00:16:06.201 "params": { 00:16:06.201 "bdev_io_pool_size": 65535, 00:16:06.201 "bdev_io_cache_size": 256, 00:16:06.201 "bdev_auto_examine": true, 00:16:06.201 "iobuf_small_cache_size": 128, 00:16:06.201 "iobuf_large_cache_size": 16 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "bdev_raid_set_options", 00:16:06.201 "params": { 00:16:06.201 "process_window_size_kb": 1024, 00:16:06.201 "process_max_bandwidth_mb_sec": 0 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "bdev_iscsi_set_options", 00:16:06.201 "params": { 00:16:06.201 "timeout_sec": 30 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "bdev_nvme_set_options", 00:16:06.201 "params": { 00:16:06.201 "action_on_timeout": "none", 00:16:06.201 "timeout_us": 0, 00:16:06.201 "timeout_admin_us": 0, 00:16:06.201 "keep_alive_timeout_ms": 10000, 00:16:06.201 "arbitration_burst": 0, 00:16:06.201 "low_priority_weight": 0, 00:16:06.201 "medium_priority_weight": 0, 00:16:06.201 "high_priority_weight": 0, 00:16:06.201 "nvme_adminq_poll_period_us": 10000, 00:16:06.201 "nvme_ioq_poll_period_us": 0, 00:16:06.201 "io_queue_requests": 512, 00:16:06.201 "delay_cmd_submit": true, 00:16:06.201 "transport_retry_count": 4, 00:16:06.201 "bdev_retry_count": 3, 00:16:06.201 "transport_ack_timeout": 0, 00:16:06.201 "ctrlr_loss_timeout_sec": 0, 00:16:06.201 "reconnect_delay_sec": 0, 00:16:06.201 "fast_io_fail_timeout_sec": 0, 00:16:06.201 "disable_auto_failback": false, 00:16:06.201 "generate_uuids": false, 00:16:06.201 "transport_tos": 0, 00:16:06.201 "nvme_error_stat": false, 00:16:06.201 "rdma_srq_size": 0, 00:16:06.201 "io_path_stat": false, 00:16:06.201 "allow_accel_sequence": false, 00:16:06.201 "rdma_max_cq_size": 0, 00:16:06.201 "rdma_cm_event_timeout_ms": 0, 00:16:06.201 "dhchap_digests": [ 00:16:06.201 "sha256", 00:16:06.201 "sha384", 00:16:06.201 "sha512" 00:16:06.201 ], 00:16:06.201 "dhchap_dhgroups": [ 00:16:06.201 "null", 00:16:06.201 "ffdhe2048", 00:16:06.201 "ffdhe3072", 00:16:06.201 "ffdhe4096", 00:16:06.201 "ffdhe6144", 00:16:06.201 "ffdhe8192" 00:16:06.201 ] 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "bdev_nvme_attach_controller", 00:16:06.201 "params": { 00:16:06.201 "name": "nvme0", 00:16:06.201 "trtype": "TCP", 00:16:06.201 "adrfam": "IPv4", 00:16:06.201 "traddr": "10.0.0.3", 00:16:06.201 "trsvcid": "4420", 00:16:06.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.201 "prchk_reftag": false, 00:16:06.201 
"prchk_guard": false, 00:16:06.201 "ctrlr_loss_timeout_sec": 0, 00:16:06.201 "reconnect_delay_sec": 0, 00:16:06.201 "fast_io_fail_timeout_sec": 0, 00:16:06.201 "psk": "key0", 00:16:06.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:06.201 "hdgst": false, 00:16:06.201 "ddgst": false, 00:16:06.201 "multipath": "multipath" 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "bdev_nvme_set_hotplug", 00:16:06.201 "params": { 00:16:06.201 "period_us": 100000, 00:16:06.201 "enable": false 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "bdev_enable_histogram", 00:16:06.201 "params": { 00:16:06.201 "name": "nvme0n1", 00:16:06.201 "enable": true 00:16:06.201 } 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "method": "bdev_wait_for_examine" 00:16:06.201 } 00:16:06.201 ] 00:16:06.201 }, 00:16:06.201 { 00:16:06.201 "subsystem": "nbd", 00:16:06.201 "config": [] 00:16:06.201 } 00:16:06.201 ] 00:16:06.201 }' 00:16:06.201 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72400 00:16:06.201 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72400 ']' 00:16:06.201 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72400 00:16:06.201 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:06.201 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.201 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72400 00:16:06.201 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:06.201 killing process with pid 72400 00:16:06.201 Received shutdown signal, test time was about 1.000000 seconds 00:16:06.201 00:16:06.201 Latency(us) 00:16:06.201 [2024-11-19T10:10:20.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.201 [2024-11-19T10:10:20.091Z] =================================================================================================================== 00:16:06.202 [2024-11-19T10:10:20.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.202 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:06.202 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72400' 00:16:06.202 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72400 00:16:06.202 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72400 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72375 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72375 ']' 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72375 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72375 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.460 killing process with pid 72375 
00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72375' 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72375 00:16:06.460 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72375 00:16:06.719 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:06.719 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:06.719 "subsystems": [ 00:16:06.719 { 00:16:06.719 "subsystem": "keyring", 00:16:06.719 "config": [ 00:16:06.719 { 00:16:06.719 "method": "keyring_file_add_key", 00:16:06.719 "params": { 00:16:06.719 "name": "key0", 00:16:06.719 "path": "/tmp/tmp.ZeUWax8M3X" 00:16:06.719 } 00:16:06.719 } 00:16:06.719 ] 00:16:06.719 }, 00:16:06.719 { 00:16:06.719 "subsystem": "iobuf", 00:16:06.719 "config": [ 00:16:06.719 { 00:16:06.719 "method": "iobuf_set_options", 00:16:06.719 "params": { 00:16:06.719 "small_pool_count": 8192, 00:16:06.719 "large_pool_count": 1024, 00:16:06.719 "small_bufsize": 8192, 00:16:06.719 "large_bufsize": 135168, 00:16:06.719 "enable_numa": false 00:16:06.719 } 00:16:06.719 } 00:16:06.719 ] 00:16:06.719 }, 00:16:06.719 { 00:16:06.719 "subsystem": "sock", 00:16:06.719 "config": [ 00:16:06.719 { 00:16:06.719 "method": "sock_set_default_impl", 00:16:06.719 "params": { 00:16:06.719 "impl_name": "uring" 00:16:06.719 } 00:16:06.719 }, 00:16:06.719 { 00:16:06.719 "method": "sock_impl_set_options", 00:16:06.719 "params": { 00:16:06.719 "impl_name": "ssl", 00:16:06.719 "recv_buf_size": 4096, 00:16:06.719 "send_buf_size": 4096, 00:16:06.719 "enable_recv_pipe": true, 00:16:06.719 "enable_quickack": false, 00:16:06.719 "enable_placement_id": 0, 00:16:06.719 "enable_zerocopy_send_server": true, 00:16:06.719 "enable_zerocopy_send_client": false, 00:16:06.719 "zerocopy_threshold": 0, 00:16:06.719 "tls_version": 0, 00:16:06.719 "enable_ktls": false 00:16:06.719 } 00:16:06.719 }, 00:16:06.719 { 00:16:06.719 "method": "sock_impl_set_options", 00:16:06.719 "params": { 00:16:06.719 "impl_name": "posix", 00:16:06.719 "recv_buf_size": 2097152, 00:16:06.720 "send_buf_size": 2097152, 00:16:06.720 "enable_recv_pipe": true, 00:16:06.720 "enable_quickack": false, 00:16:06.720 "enable_placement_id": 0, 00:16:06.720 "enable_zerocopy_send_server": true, 00:16:06.720 "enable_zerocopy_send_client": false, 00:16:06.720 "zerocopy_threshold": 0, 00:16:06.720 "tls_version": 0, 00:16:06.720 "enable_ktls": false 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "sock_impl_set_options", 00:16:06.720 "params": { 00:16:06.720 "impl_name": "uring", 00:16:06.720 "recv_buf_size": 2097152, 00:16:06.720 "send_buf_size": 2097152, 00:16:06.720 "enable_recv_pipe": true, 00:16:06.720 "enable_quickack": false, 00:16:06.720 "enable_placement_id": 0, 00:16:06.720 "enable_zerocopy_send_server": false, 00:16:06.720 "enable_zerocopy_send_client": false, 00:16:06.720 "zerocopy_threshold": 0, 00:16:06.720 "tls_version": 0, 00:16:06.720 "enable_ktls": false 00:16:06.720 } 00:16:06.720 } 00:16:06.720 ] 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "subsystem": "vmd", 00:16:06.720 "config": [] 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "subsystem": "accel", 00:16:06.720 "config": [ 00:16:06.720 { 00:16:06.720 "method": "accel_set_options", 00:16:06.720 
"params": { 00:16:06.720 "small_cache_size": 128, 00:16:06.720 "large_cache_size": 16, 00:16:06.720 "task_count": 2048, 00:16:06.720 "sequence_count": 2048, 00:16:06.720 "buf_count": 2048 00:16:06.720 } 00:16:06.720 } 00:16:06.720 ] 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "subsystem": "bdev", 00:16:06.720 "config": [ 00:16:06.720 { 00:16:06.720 "method": "bdev_set_options", 00:16:06.720 "params": { 00:16:06.720 "bdev_io_pool_size": 65535, 00:16:06.720 "bdev_io_cache_size": 256, 00:16:06.720 "bdev_auto_examine": true, 00:16:06.720 "iobuf_small_cache_size": 128, 00:16:06.720 "iobuf_large_cache_size": 16 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "bdev_raid_set_options", 00:16:06.720 "params": { 00:16:06.720 "process_window_size_kb": 1024, 00:16:06.720 "process_max_bandwidth_mb_sec": 0 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "bdev_iscsi_set_options", 00:16:06.720 "params": { 00:16:06.720 "timeout_sec": 30 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "bdev_nvme_set_options", 00:16:06.720 "params": { 00:16:06.720 "action_on_timeout": "none", 00:16:06.720 "timeout_us": 0, 00:16:06.720 "timeout_admin_us": 0, 00:16:06.720 "keep_alive_timeout_ms": 10000, 00:16:06.720 "arbitration_burst": 0, 00:16:06.720 "low_priority_weight": 0, 00:16:06.720 "medium_priority_weight": 0, 00:16:06.720 "high_priority_weight": 0, 00:16:06.720 "nvme_adminq_poll_period_us": 10000, 00:16:06.720 "nvme_ioq_poll_period_us": 0, 00:16:06.720 "io_queue_requests": 0, 00:16:06.720 "delay_cmd_submit": true, 00:16:06.720 "transport_retry_count": 4, 00:16:06.720 "bdev_retry_count": 3, 00:16:06.720 "transport_ack_timeout": 0, 00:16:06.720 "ctrlr_loss_timeout_sec": 0, 00:16:06.720 "reconnect_delay_sec": 0, 00:16:06.720 "fast_io_fail_timeout_sec": 0, 00:16:06.720 "disable_auto_failback": false, 00:16:06.720 "generate_uuids": false, 00:16:06.720 "transport_tos": 0, 00:16:06.720 "nvme_error_stat": false, 00:16:06.720 "rdma_srq_size": 0, 00:16:06.720 "io_path_stat": false, 00:16:06.720 "allow_accel_sequence": false, 00:16:06.720 "rdma_max_cq_size": 0, 00:16:06.720 "rdma_cm_event_timeout_ms": 0, 00:16:06.720 "dhchap_digests": [ 00:16:06.720 "sha256", 00:16:06.720 "sha384", 00:16:06.720 "sha512" 00:16:06.720 ], 00:16:06.720 "dhchap_dhgroups": [ 00:16:06.720 "null", 00:16:06.720 "ffdhe2048", 00:16:06.720 "ffdhe3072", 00:16:06.720 "ffdhe4096", 00:16:06.720 "ffdhe6144", 00:16:06.720 "ffdhe8192" 00:16:06.720 ] 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "bdev_nvme_set_hotplug", 00:16:06.720 "params": { 00:16:06.720 "period_us": 100000, 00:16:06.720 "enable": false 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "bdev_malloc_create", 00:16:06.720 "params": { 00:16:06.720 "name": "malloc0", 00:16:06.720 "num_blocks": 8192, 00:16:06.720 "block_size": 4096, 00:16:06.720 "physical_block_size": 4096, 00:16:06.720 "uuid": "c2bd1cb4-b62c-425b-b6b7-beb7e75a17be", 00:16:06.720 "optimal_io_boundary": 0, 00:16:06.720 "md_size": 0, 00:16:06.720 "dif_type": 0, 00:16:06.720 "dif_is_head_of_md": false, 00:16:06.720 "dif_pi_format": 0 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "bdev_wait_for_examine" 00:16:06.720 } 00:16:06.720 ] 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "subsystem": "nbd", 00:16:06.720 "config": [] 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "subsystem": "scheduler", 00:16:06.720 "config": [ 00:16:06.720 { 00:16:06.720 "method": "framework_set_scheduler", 00:16:06.720 "params": { 
00:16:06.720 "name": "static" 00:16:06.720 } 00:16:06.720 } 00:16:06.720 ] 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "subsystem": "nvmf", 00:16:06.720 "config": [ 00:16:06.720 { 00:16:06.720 "method": "nvmf_set_config", 00:16:06.720 "params": { 00:16:06.720 "discovery_filter": "match_any", 00:16:06.720 "admin_cmd_passthru": { 00:16:06.720 "identify_ctrlr": false 00:16:06.720 }, 00:16:06.720 "dhchap_digests": [ 00:16:06.720 "sha256", 00:16:06.720 "sha384", 00:16:06.720 "sha512" 00:16:06.720 ], 00:16:06.720 "dhchap_dhgroups": [ 00:16:06.720 "null", 00:16:06.720 "ffdhe2048", 00:16:06.720 "ffdhe3072", 00:16:06.720 "ffdhe4096", 00:16:06.720 "ffdhe6144", 00:16:06.720 "ffdhe8192" 00:16:06.720 ] 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "nvmf_set_max_subsystems", 00:16:06.720 "params": { 00:16:06.720 "max_subsystems": 1024 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "nvmf_set_crdt", 00:16:06.720 "params": { 00:16:06.720 "crdt1": 0, 00:16:06.720 "crdt2": 0, 00:16:06.720 "crdt3": 0 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "nvmf_create_transport", 00:16:06.720 "params": { 00:16:06.720 "trtype": "TCP", 00:16:06.720 "max_queue_depth": 128, 00:16:06.720 "max_io_qpairs_per_ctrlr": 127, 00:16:06.720 "in_capsule_data_size": 4096, 00:16:06.720 "max_io_size": 131072, 00:16:06.720 "io_unit_size": 131072, 00:16:06.720 "max_aq_depth": 128, 00:16:06.720 "num_shared_buffers": 511, 00:16:06.720 "buf_cache_size": 4294967295, 00:16:06.720 "dif_insert_or_strip": false, 00:16:06.720 "zcopy": false, 00:16:06.720 "c2h_success": false, 00:16:06.720 "sock_priority": 0, 00:16:06.720 "abort_timeout_sec": 1, 00:16:06.720 "ack_timeout": 0, 00:16:06.720 "data_wr_pool_size": 0 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "nvmf_create_subsystem", 00:16:06.720 "params": { 00:16:06.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.720 "allow_any_host": false, 00:16:06.720 "serial_number": "00000000000000000000", 00:16:06.720 "model_number": "SPDK bdev Controller", 00:16:06.720 "max_namespaces": 32, 00:16:06.720 "min_cntlid": 1, 00:16:06.720 "max_cntlid": 65519, 00:16:06.720 "ana_reporting": false 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "nvmf_subsystem_add_host", 00:16:06.720 "params": { 00:16:06.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.720 "host": "nqn.2016-06.io.spdk:host1", 00:16:06.720 "psk": "key0" 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "nvmf_subsystem_add_ns", 00:16:06.720 "params": { 00:16:06.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.720 "namespace": { 00:16:06.720 "nsid": 1, 00:16:06.720 "bdev_name": "malloc0", 00:16:06.720 "nguid": "C2BD1CB4B62C425BB6B7BEB7E75A17BE", 00:16:06.720 "uuid": "c2bd1cb4-b62c-425b-b6b7-beb7e75a17be", 00:16:06.720 "no_auto_visible": false 00:16:06.720 } 00:16:06.720 } 00:16:06.720 }, 00:16:06.720 { 00:16:06.720 "method": "nvmf_subsystem_add_listener", 00:16:06.720 "params": { 00:16:06.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.720 "listen_address": { 00:16:06.720 "trtype": "TCP", 00:16:06.720 "adrfam": "IPv4", 00:16:06.720 "traddr": "10.0.0.3", 00:16:06.720 "trsvcid": "4420" 00:16:06.720 }, 00:16:06.720 "secure_channel": false, 00:16:06.720 "sock_impl": "ssl" 00:16:06.720 } 00:16:06.720 } 00:16:06.720 ] 00:16:06.720 } 00:16:06.720 ] 00:16:06.720 }' 00:16:06.720 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:06.720 10:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.720 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.720 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72453 00:16:06.720 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72453 00:16:06.721 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72453 ']' 00:16:06.721 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:06.721 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.721 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.721 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.721 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.721 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.721 [2024-11-19 10:10:20.516696] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:06.721 [2024-11-19 10:10:20.516794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.979 [2024-11-19 10:10:20.663883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.979 [2024-11-19 10:10:20.738738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.979 [2024-11-19 10:10:20.738809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.979 [2024-11-19 10:10:20.738828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.979 [2024-11-19 10:10:20.738843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.979 [2024-11-19 10:10:20.738855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
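Instead of re-issuing each RPC, this target instance (pid 72453) is started with -c /dev/fd/62: the tgtcfg JSON captured by save_config is fed straight back to nvmf_tgt (the test passes it via bash process substitution, which is where the /dev/fd/62 path in the command line comes from). The trailing notices also point at the trace data enabled by -e 0xFFFF. A sketch of both steps under the paths from this run, writing the config to a temporary file (/tmp/tgtcfg.json is an assumption) rather than a /dev/fd pipe:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgtcfg.json      # capture from the running target
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -c /tmp/tgtcfg.json                                        # restart from the captured JSON once the old target is down
  spdk_trace -s nvmf -i 0                  # runtime snapshot, as the notice above suggests (assumes the build's spdk_trace is on PATH)
  cp /dev/shm/nvmf_trace.0 /tmp/           # or keep the raw shm trace file for offline analysis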
00:16:06.979 [2024-11-19 10:10:20.739399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.237 [2024-11-19 10:10:20.911415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:07.237 [2024-11-19 10:10:20.994229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.237 [2024-11-19 10:10:21.026160] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:07.237 [2024-11-19 10:10:21.026371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72485 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72485 /var/tmp/bdevperf.sock 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72485 ']' 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.802 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:07.802 "subsystems": [ 00:16:07.802 { 00:16:07.802 "subsystem": "keyring", 00:16:07.802 "config": [ 00:16:07.802 { 00:16:07.802 "method": "keyring_file_add_key", 00:16:07.802 "params": { 00:16:07.802 "name": "key0", 00:16:07.802 "path": "/tmp/tmp.ZeUWax8M3X" 00:16:07.802 } 00:16:07.802 } 00:16:07.802 ] 00:16:07.802 }, 00:16:07.802 { 00:16:07.802 "subsystem": "iobuf", 00:16:07.802 "config": [ 00:16:07.802 { 00:16:07.802 "method": "iobuf_set_options", 00:16:07.802 "params": { 00:16:07.802 "small_pool_count": 8192, 00:16:07.802 "large_pool_count": 1024, 00:16:07.802 "small_bufsize": 8192, 00:16:07.802 "large_bufsize": 135168, 00:16:07.802 "enable_numa": false 00:16:07.802 } 00:16:07.802 } 00:16:07.802 ] 00:16:07.802 }, 00:16:07.802 { 00:16:07.802 "subsystem": "sock", 00:16:07.802 "config": [ 00:16:07.802 { 00:16:07.802 "method": "sock_set_default_impl", 00:16:07.802 "params": { 00:16:07.802 "impl_name": "uring" 00:16:07.802 } 00:16:07.802 }, 00:16:07.802 { 00:16:07.802 "method": "sock_impl_set_options", 00:16:07.802 "params": { 00:16:07.802 "impl_name": "ssl", 00:16:07.802 "recv_buf_size": 4096, 00:16:07.802 "send_buf_size": 4096, 00:16:07.802 "enable_recv_pipe": true, 00:16:07.802 "enable_quickack": false, 00:16:07.802 "enable_placement_id": 0, 00:16:07.802 "enable_zerocopy_send_server": true, 00:16:07.802 "enable_zerocopy_send_client": false, 00:16:07.802 "zerocopy_threshold": 0, 00:16:07.802 "tls_version": 0, 00:16:07.802 "enable_ktls": false 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "sock_impl_set_options", 00:16:07.803 "params": { 00:16:07.803 "impl_name": "posix", 00:16:07.803 "recv_buf_size": 2097152, 00:16:07.803 "send_buf_size": 2097152, 00:16:07.803 "enable_recv_pipe": true, 00:16:07.803 "enable_quickack": false, 00:16:07.803 "enable_placement_id": 0, 00:16:07.803 "enable_zerocopy_send_server": true, 00:16:07.803 "enable_zerocopy_send_client": false, 00:16:07.803 "zerocopy_threshold": 0, 00:16:07.803 "tls_version": 0, 00:16:07.803 "enable_ktls": false 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "sock_impl_set_options", 00:16:07.803 "params": { 00:16:07.803 "impl_name": "uring", 00:16:07.803 "recv_buf_size": 2097152, 00:16:07.803 "send_buf_size": 2097152, 00:16:07.803 "enable_recv_pipe": true, 00:16:07.803 "enable_quickack": false, 00:16:07.803 "enable_placement_id": 0, 00:16:07.803 "enable_zerocopy_send_server": false, 00:16:07.803 "enable_zerocopy_send_client": false, 00:16:07.803 "zerocopy_threshold": 0, 00:16:07.803 "tls_version": 0, 00:16:07.803 "enable_ktls": false 00:16:07.803 } 00:16:07.803 } 00:16:07.803 ] 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "subsystem": "vmd", 00:16:07.803 "config": [] 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "subsystem": "accel", 00:16:07.803 "config": [ 00:16:07.803 { 00:16:07.803 "method": "accel_set_options", 00:16:07.803 "params": { 00:16:07.803 "small_cache_size": 128, 00:16:07.803 "large_cache_size": 16, 00:16:07.803 "task_count": 2048, 00:16:07.803 "sequence_count": 2048, 00:16:07.803 "buf_count": 2048 00:16:07.803 } 00:16:07.803 } 00:16:07.803 ] 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "subsystem": "bdev", 00:16:07.803 "config": [ 00:16:07.803 { 00:16:07.803 "method": 
"bdev_set_options", 00:16:07.803 "params": { 00:16:07.803 "bdev_io_pool_size": 65535, 00:16:07.803 "bdev_io_cache_size": 256, 00:16:07.803 "bdev_auto_examine": true, 00:16:07.803 "iobuf_small_cache_size": 128, 00:16:07.803 "iobuf_large_cache_size": 16 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "bdev_raid_set_options", 00:16:07.803 "params": { 00:16:07.803 "process_window_size_kb": 1024, 00:16:07.803 "process_max_bandwidth_mb_sec": 0 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "bdev_iscsi_set_options", 00:16:07.803 "params": { 00:16:07.803 "timeout_sec": 30 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "bdev_nvme_set_options", 00:16:07.803 "params": { 00:16:07.803 "action_on_timeout": "none", 00:16:07.803 "timeout_us": 0, 00:16:07.803 "timeout_admin_us": 0, 00:16:07.803 "keep_alive_timeout_ms": 10000, 00:16:07.803 "arbitration_burst": 0, 00:16:07.803 "low_priority_weight": 0, 00:16:07.803 "medium_priority_weight": 0, 00:16:07.803 "high_priority_weight": 0, 00:16:07.803 "nvme_adminq_poll_period_us": 10000, 00:16:07.803 "nvme_ioq_poll_period_us": 0, 00:16:07.803 "io_queue_requests": 512, 00:16:07.803 "delay_cmd_submit": true, 00:16:07.803 "transport_retry_count": 4, 00:16:07.803 "bdev_retry_count": 3, 00:16:07.803 "transport_ack_timeout": 0, 00:16:07.803 "ctrlr_loss_timeout_sec": 0, 00:16:07.803 "reconnect_delay_sec": 0, 00:16:07.803 "fast_io_fail_timeout_sec": 0, 00:16:07.803 "disable_auto_failback": false, 00:16:07.803 "generate_uuids": false, 00:16:07.803 "transport_tos": 0, 00:16:07.803 "nvme_error_stat": false, 00:16:07.803 "rdma_srq_size": 0, 00:16:07.803 "io_path_stat": false, 00:16:07.803 "allow_accel_sequence": false, 00:16:07.803 "rdma_max_cq_size": 0, 00:16:07.803 "rdma_cm_event_timeout_ms": 0, 00:16:07.803 "dhchap_digests": [ 00:16:07.803 "sha256", 00:16:07.803 "sha384", 00:16:07.803 "sha512" 00:16:07.803 ], 00:16:07.803 "dhchap_dhgroups": [ 00:16:07.803 "null", 00:16:07.803 "ffdhe2048", 00:16:07.803 "ffdhe3072", 00:16:07.803 "ffdhe4096", 00:16:07.803 "ffdhe6144", 00:16:07.803 "ffdhe8192" 00:16:07.803 ] 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "bdev_nvme_attach_controller", 00:16:07.803 "params": { 00:16:07.803 "name": "nvme0", 00:16:07.803 "trtype": "TCP", 00:16:07.803 "adrfam": "IPv4", 00:16:07.803 "traddr": "10.0.0.3", 00:16:07.803 "trsvcid": "4420", 00:16:07.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.803 "prchk_reftag": false, 00:16:07.803 "prchk_guard": false, 00:16:07.803 "ctrlr_loss_timeout_sec": 0, 00:16:07.803 "reconnect_delay_sec": 0, 00:16:07.803 "fast_io_fail_timeout_sec": 0, 00:16:07.803 "psk": "key0", 00:16:07.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.803 "hdgst": false, 00:16:07.803 "ddgst": false, 00:16:07.803 "multipath": "multipath" 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "bdev_nvme_set_hotplug", 00:16:07.803 "params": { 00:16:07.803 "period_us": 100000, 00:16:07.803 "enable": false 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "bdev_enable_histogram", 00:16:07.803 "params": { 00:16:07.803 "name": "nvme0n1", 00:16:07.803 "enable": true 00:16:07.803 } 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "method": "bdev_wait_for_examine" 00:16:07.803 } 00:16:07.803 ] 00:16:07.803 }, 00:16:07.803 { 00:16:07.803 "subsystem": "nbd", 00:16:07.803 "config": [] 00:16:07.803 } 00:16:07.803 ] 00:16:07.803 }' 00:16:07.803 [2024-11-19 10:10:21.617224] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 
initialization... 00:16:07.803 [2024-11-19 10:10:21.617328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72485 ] 00:16:08.062 [2024-11-19 10:10:21.770515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.062 [2024-11-19 10:10:21.840247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.320 [2024-11-19 10:10:21.979874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:08.320 [2024-11-19 10:10:22.031984] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:08.888 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.888 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:08.888 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:08.888 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:09.454 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.454 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:09.454 Running I/O for 1 seconds... 00:16:10.405 3885.00 IOPS, 15.18 MiB/s 00:16:10.405 Latency(us) 00:16:10.405 [2024-11-19T10:10:24.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.405 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:10.405 Verification LBA range: start 0x0 length 0x2000 00:16:10.405 nvme0n1 : 1.02 3923.71 15.33 0.00 0.00 32175.12 5272.67 21448.15 00:16:10.405 [2024-11-19T10:10:24.294Z] =================================================================================================================== 00:16:10.405 [2024-11-19T10:10:24.294Z] Total : 3923.71 15.33 0.00 0.00 32175.12 5272.67 21448.15 00:16:10.405 { 00:16:10.405 "results": [ 00:16:10.405 { 00:16:10.405 "job": "nvme0n1", 00:16:10.405 "core_mask": "0x2", 00:16:10.405 "workload": "verify", 00:16:10.405 "status": "finished", 00:16:10.405 "verify_range": { 00:16:10.405 "start": 0, 00:16:10.405 "length": 8192 00:16:10.405 }, 00:16:10.405 "queue_depth": 128, 00:16:10.405 "io_size": 4096, 00:16:10.405 "runtime": 1.022757, 00:16:10.405 "iops": 3923.7081731046574, 00:16:10.405 "mibps": 15.326985051190068, 00:16:10.405 "io_failed": 0, 00:16:10.405 "io_timeout": 0, 00:16:10.405 "avg_latency_us": 32175.124344063606, 00:16:10.405 "min_latency_us": 5272.669090909091, 00:16:10.405 "max_latency_us": 21448.145454545454 00:16:10.405 } 00:16:10.405 ], 00:16:10.405 "core_count": 1 00:16:10.405 } 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@813 -- # id=0 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:10.405 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:10.405 nvmf_trace.0 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72485 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72485 ']' 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72485 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72485 00:16:10.664 killing process with pid 72485 00:16:10.664 Received shutdown signal, test time was about 1.000000 seconds 00:16:10.664 00:16:10.664 Latency(us) 00:16:10.664 [2024-11-19T10:10:24.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.664 [2024-11-19T10:10:24.553Z] =================================================================================================================== 00:16:10.664 [2024-11-19T10:10:24.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72485' 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72485 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72485 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:10.664 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:10.922 rmmod nvme_tcp 00:16:10.922 rmmod nvme_fabrics 00:16:10.922 rmmod nvme_keyring 00:16:10.922 10:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72453 ']' 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72453 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72453 ']' 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72453 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72453 00:16:10.922 killing process with pid 72453 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72453' 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72453 00:16:10.922 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72453 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:16:11.181 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:11.181 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:11.181 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MzioGRP3p9 /tmp/tmp.gcAwbKJjTu /tmp/tmp.ZeUWax8M3X 00:16:11.465 ************************************ 00:16:11.465 END TEST nvmf_tls 00:16:11.465 ************************************ 00:16:11.465 00:16:11.465 real 1m24.510s 00:16:11.465 user 2m18.729s 00:16:11.465 sys 0m27.203s 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.465 ************************************ 00:16:11.465 START TEST nvmf_fips 00:16:11.465 ************************************ 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:11.465 * Looking for test storage... 
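For reference, the nvmf_tls teardown traced just above (process_shm, the two killprocess calls, nvmftestfini) condenses to a short command sequence. The sketch below is an illustrative condensation of what the trace shows, not the common.sh helpers themselves; the PIDs 72485/72453 and the /tmp key files are specific to this run, and the final namespace removal is an assumption about what _remove_spdk_ns amounts to here.

# archive the shared-memory trace produced by the target
tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
kill 72485                                   # bdevperf (reactor_1) used for the TLS verify runs
kill 72453                                   # nvmf_tgt started for the TLS tests
sync
modprobe -v -r nvme-tcp                      # verbose removal logs rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK_NVMF-tagged rules
ip link delete nvmf_br type bridge           # bridge first, then the veth halves, then the namespace
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns del nvmf_tgt_ns_spdk                # assumption: the net effect of _remove_spdk_ns in this run
rm -f /tmp/tmp.MzioGRP3p9 /tmp/tmp.gcAwbKJjTu /tmp/tmp.ZeUWax8M3X   # PSK files generated by tls.sh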
00:16:11.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:11.465 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:11.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.725 --rc genhtml_branch_coverage=1 00:16:11.725 --rc genhtml_function_coverage=1 00:16:11.725 --rc genhtml_legend=1 00:16:11.725 --rc geninfo_all_blocks=1 00:16:11.725 --rc geninfo_unexecuted_blocks=1 00:16:11.725 00:16:11.725 ' 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:11.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.725 --rc genhtml_branch_coverage=1 00:16:11.725 --rc genhtml_function_coverage=1 00:16:11.725 --rc genhtml_legend=1 00:16:11.725 --rc geninfo_all_blocks=1 00:16:11.725 --rc geninfo_unexecuted_blocks=1 00:16:11.725 00:16:11.725 ' 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:11.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.725 --rc genhtml_branch_coverage=1 00:16:11.725 --rc genhtml_function_coverage=1 00:16:11.725 --rc genhtml_legend=1 00:16:11.725 --rc geninfo_all_blocks=1 00:16:11.725 --rc geninfo_unexecuted_blocks=1 00:16:11.725 00:16:11.725 ' 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:11.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.725 --rc genhtml_branch_coverage=1 00:16:11.725 --rc genhtml_function_coverage=1 00:16:11.725 --rc genhtml_legend=1 00:16:11.725 --rc geninfo_all_blocks=1 00:16:11.725 --rc geninfo_unexecuted_blocks=1 00:16:11.725 00:16:11.725 ' 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
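The lcov check just traced and the later "ge 3.1.1 3.0.0" OpenSSL check both go through cmp_versions in scripts/common.sh, which splits each version string on '.', '-' and ':' and compares the pieces numerically, as the ver1[v]/ver2[v] steps in the trace show. A rough stand-alone equivalent (an illustrative re-implementation for readability, not the actual cmp_versions code, and assuming purely numeric fields):

# ver_ge A B -> exit 0 when version A >= version B, comparing field by field
ver_ge() {
    local IFS=.-:                        # same separators the traced cmp_versions uses
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}  # missing trailing fields count as 0
        (( x > y )) && return 0
        (( x < y )) && return 1
    done
    return 0                             # equal versions satisfy >=
}

ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is new enough for the FIPS provider checks"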
00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.725 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:11.726 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.726 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:11.727 Error setting digest 00:16:11.727 4052FF53217F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:11.727 4052FF53217F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:11.727 
10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:11.727 Cannot find device "nvmf_init_br" 00:16:11.727 10:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:11.727 Cannot find device "nvmf_init_br2" 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:11.727 Cannot find device "nvmf_tgt_br" 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:11.727 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.986 Cannot find device "nvmf_tgt_br2" 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:11.986 Cannot find device "nvmf_init_br" 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:11.986 Cannot find device "nvmf_init_br2" 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:11.986 Cannot find device "nvmf_tgt_br" 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:11.986 Cannot find device "nvmf_tgt_br2" 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:11.986 Cannot find device "nvmf_br" 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:11.986 Cannot find device "nvmf_init_if" 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:11.986 Cannot find device "nvmf_init_if2" 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.986 10:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.986 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.987 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.245 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:12.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:12.246 00:16:12.246 --- 10.0.0.3 ping statistics --- 00:16:12.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.246 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:12.246 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:12.246 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:16:12.246 00:16:12.246 --- 10.0.0.4 ping statistics --- 00:16:12.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.246 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:12.246 00:16:12.246 --- 10.0.0.1 ping statistics --- 00:16:12.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.246 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:12.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:12.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:16:12.246 00:16:12.246 --- 10.0.0.2 ping statistics --- 00:16:12.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.246 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:12.246 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72801 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72801 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72801 ']' 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.246 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:12.246 [2024-11-19 10:10:26.083595] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
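The nvmf_veth_init sequence traced above is what puts the target's listen addresses (10.0.0.3/10.0.0.4) inside the nvmf_tgt_ns_spdk namespace while the initiator keeps 10.0.0.1/10.0.0.2 in the root namespace, with a bridge joining the veth halves and SPDK_NVMF-tagged iptables rules opening port 4420. Condensed to the essential commands from the trace (only the first initiator/target veth pair is shown; the *_if2 pair is set up the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator half stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target half is moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                           # root namespace -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> initiator

The comment strings on the iptables rules are what lets the later iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup strip exactly these rules and nothing else.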
00:16:12.246 [2024-11-19 10:10:26.083683] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.504 [2024-11-19 10:10:26.231510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.504 [2024-11-19 10:10:26.298686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.504 [2024-11-19 10:10:26.298741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.504 [2024-11-19 10:10:26.298755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.504 [2024-11-19 10:10:26.298766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.504 [2024-11-19 10:10:26.298775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:12.504 [2024-11-19 10:10:26.299221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.504 [2024-11-19 10:10:26.358250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.LrG 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.LrG 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.LrG 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.LrG 00:16:13.439 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.698 [2024-11-19 10:10:27.440525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.698 [2024-11-19 10:10:27.456473] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:13.698 [2024-11-19 10:10:27.456708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:13.698 malloc0 00:16:13.698 10:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72843 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72843 /var/tmp/bdevperf.sock 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72843 ']' 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:13.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.698 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:13.957 [2024-11-19 10:10:27.604386] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:13.957 [2024-11-19 10:10:27.604498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72843 ] 00:16:13.957 [2024-11-19 10:10:27.757408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.957 [2024-11-19 10:10:27.827466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.215 [2024-11-19 10:10:27.885026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.780 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.780 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:14.780 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.LrG 00:16:15.038 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:15.295 [2024-11-19 10:10:29.106151] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:15.295 TLSTESTn1 00:16:15.553 10:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:15.553 Running I/O for 10 seconds... 
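Before the verify run above produces any I/O, fips.sh has already registered the generated PSK with the bdevperf instance over its private RPC socket and attached a TLS-enabled controller to the listener on 10.0.0.3:4420; the key name key0 and the path /tmp/spdk-psk.LrG are the ones from this run and will differ between runs. Reduced to the RPC calls visible in the trace plus the test trigger:

# register the PSK file (created with mktemp and chmod 0600 earlier in fips.sh) under the name key0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.LrG
# attach a TLS-enabled NVMe-oF controller; --psk key0 is what turns on TLS for this connection
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# kick off the queued bdevperf job (128-deep, 4096-byte verify workload for 10 seconds, per the -q/-o/-w/-t flags above)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests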
00:16:17.859 3902.00 IOPS, 15.24 MiB/s [2024-11-19T10:10:32.346Z] 3961.00 IOPS, 15.47 MiB/s [2024-11-19T10:10:33.742Z] 3970.33 IOPS, 15.51 MiB/s [2024-11-19T10:10:34.677Z] 4006.50 IOPS, 15.65 MiB/s [2024-11-19T10:10:35.611Z] 4011.00 IOPS, 15.67 MiB/s [2024-11-19T10:10:36.546Z] 4019.67 IOPS, 15.70 MiB/s [2024-11-19T10:10:37.481Z] 4024.29 IOPS, 15.72 MiB/s [2024-11-19T10:10:38.415Z] 4031.50 IOPS, 15.75 MiB/s [2024-11-19T10:10:39.352Z] 4033.11 IOPS, 15.75 MiB/s [2024-11-19T10:10:39.352Z] 4038.50 IOPS, 15.78 MiB/s 00:16:25.463 Latency(us) 00:16:25.463 [2024-11-19T10:10:39.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.463 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:25.463 Verification LBA range: start 0x0 length 0x2000 00:16:25.463 TLSTESTn1 : 10.02 4044.27 15.80 0.00 0.00 31591.73 5659.93 25737.77 00:16:25.463 [2024-11-19T10:10:39.352Z] =================================================================================================================== 00:16:25.463 [2024-11-19T10:10:39.352Z] Total : 4044.27 15.80 0.00 0.00 31591.73 5659.93 25737.77 00:16:25.463 { 00:16:25.463 "results": [ 00:16:25.463 { 00:16:25.463 "job": "TLSTESTn1", 00:16:25.463 "core_mask": "0x4", 00:16:25.463 "workload": "verify", 00:16:25.463 "status": "finished", 00:16:25.463 "verify_range": { 00:16:25.463 "start": 0, 00:16:25.463 "length": 8192 00:16:25.463 }, 00:16:25.463 "queue_depth": 128, 00:16:25.463 "io_size": 4096, 00:16:25.463 "runtime": 10.016892, 00:16:25.463 "iops": 4044.268421781926, 00:16:25.463 "mibps": 15.797923522585648, 00:16:25.463 "io_failed": 0, 00:16:25.463 "io_timeout": 0, 00:16:25.463 "avg_latency_us": 31591.73387178791, 00:16:25.463 "min_latency_us": 5659.927272727273, 00:16:25.463 "max_latency_us": 25737.774545454544 00:16:25.463 } 00:16:25.463 ], 00:16:25.463 "core_count": 1 00:16:25.463 } 00:16:25.721 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:25.721 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:25.722 nvmf_trace.0 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72843 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72843 ']' 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72843 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72843 00:16:25.722 killing process with pid 72843 00:16:25.722 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.722 00:16:25.722 Latency(us) 00:16:25.722 [2024-11-19T10:10:39.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.722 [2024-11-19T10:10:39.611Z] =================================================================================================================== 00:16:25.722 [2024-11-19T10:10:39.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72843' 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72843 00:16:25.722 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72843 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.981 rmmod nvme_tcp 00:16:25.981 rmmod nvme_fabrics 00:16:25.981 rmmod nvme_keyring 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72801 ']' 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72801 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72801 ']' 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72801 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.981 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72801 00:16:26.240 killing process with pid 72801 00:16:26.240 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:26.240 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:26.240 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72801' 00:16:26.240 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72801 00:16:26.240 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72801 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:26.240 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:16:26.497 10:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.LrG 00:16:26.497 00:16:26.497 real 0m15.167s 00:16:26.497 user 0m21.356s 00:16:26.497 sys 0m5.611s 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.497 ************************************ 00:16:26.497 END TEST nvmf_fips 00:16:26.497 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:26.497 ************************************ 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.755 ************************************ 00:16:26.755 START TEST nvmf_control_msg_list 00:16:26.755 ************************************ 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:26.755 * Looking for test storage... 00:16:26.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:26.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.755 --rc genhtml_branch_coverage=1 00:16:26.755 --rc genhtml_function_coverage=1 00:16:26.755 --rc genhtml_legend=1 00:16:26.755 --rc geninfo_all_blocks=1 00:16:26.755 --rc geninfo_unexecuted_blocks=1 00:16:26.755 00:16:26.755 ' 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:26.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.755 --rc genhtml_branch_coverage=1 00:16:26.755 --rc genhtml_function_coverage=1 00:16:26.755 --rc genhtml_legend=1 00:16:26.755 --rc geninfo_all_blocks=1 00:16:26.755 --rc geninfo_unexecuted_blocks=1 00:16:26.755 00:16:26.755 ' 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:26.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.755 --rc genhtml_branch_coverage=1 00:16:26.755 --rc genhtml_function_coverage=1 00:16:26.755 --rc genhtml_legend=1 00:16:26.755 --rc geninfo_all_blocks=1 00:16:26.755 --rc geninfo_unexecuted_blocks=1 00:16:26.755 00:16:26.755 ' 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:26.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.755 --rc genhtml_branch_coverage=1 00:16:26.755 --rc genhtml_function_coverage=1 00:16:26.755 --rc genhtml_legend=1 00:16:26.755 --rc geninfo_all_blocks=1 00:16:26.755 --rc 
geninfo_unexecuted_blocks=1 00:16:26.755 00:16:26.755 ' 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.755 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.756 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.756 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:27.014 Cannot find device "nvmf_init_br" 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:27.014 Cannot find device "nvmf_init_br2" 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:27.014 Cannot find device "nvmf_tgt_br" 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.014 Cannot find device "nvmf_tgt_br2" 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:27.014 Cannot find device "nvmf_init_br" 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:27.014 Cannot find device "nvmf_init_br2" 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:16:27.014 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:27.014 Cannot find device "nvmf_tgt_br" 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:27.015 Cannot find device "nvmf_tgt_br2" 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:27.015 Cannot find device "nvmf_br" 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:27.015 Cannot find 
device "nvmf_init_if" 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:27.015 Cannot find device "nvmf_init_if2" 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:27.015 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:27.015 10:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.273 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.273 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.273 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:27.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:16:27.274 00:16:27.274 --- 10.0.0.3 ping statistics --- 00:16:27.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.274 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:27.274 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:27.274 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:16:27.274 00:16:27.274 --- 10.0.0.4 ping statistics --- 00:16:27.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.274 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:27.274 10:10:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:27.274 00:16:27.274 --- 10.0.0.1 ping statistics --- 00:16:27.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.274 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:27.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:16:27.274 00:16:27.274 --- 10.0.0.2 ping statistics --- 00:16:27.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.274 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73230 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73230 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73230 ']' 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
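(For readers reconstructing the setup from this trace: the nvmfappstart/waitforlisten sequence above boils down to launching nvmf_tgt inside the test network namespace and polling its RPC socket before any configuration RPCs are sent. The sketch below is only an illustration under assumptions -- the binary path, namespace name, and flags match what the log shows, but the 30-second poll loop and the direct rpc.py calls stand in for the repo's own helper functions and are not the test's actual code.)

#!/usr/bin/env bash
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # repo location as it appears in the log (assumed reachable)
RPC_SOCK=/var/tmp/spdk.sock             # default RPC socket named in the "Waiting for process..." message

# Launch the target inside the namespace with shared-memory id 0 and all tracepoint groups, as in the trace.
ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# Poll the RPC socket for up to ~30 seconds; rpc_get_methods answers once the app is listening.
for _ in $(seq 1 300); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

# The test then issues the configuration RPCs recorded later in this log, for example:
#   rpc.py -s "$RPC_SOCK" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1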
00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.274 10:10:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:27.274 [2024-11-19 10:10:41.106432] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:27.274 [2024-11-19 10:10:41.106542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.532 [2024-11-19 10:10:41.261434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.532 [2024-11-19 10:10:41.329200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.532 [2024-11-19 10:10:41.329276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.532 [2024-11-19 10:10:41.329289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.532 [2024-11-19 10:10:41.329300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.532 [2024-11-19 10:10:41.329309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.532 [2024-11-19 10:10:41.329756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.532 [2024-11-19 10:10:41.390297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:28.467 [2024-11-19 10:10:42.181453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:28.467 Malloc0 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:28.467 [2024-11-19 10:10:42.224933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73262 00:16:28.467 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:28.468 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73263 00:16:28.468 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:28.468 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73264 00:16:28.468 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:28.468 10:10:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73262 00:16:28.726 [2024-11-19 10:10:42.409609] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:28.726 [2024-11-19 10:10:42.409812] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:28.726 [2024-11-19 10:10:42.419566] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:29.661 Initializing NVMe Controllers 00:16:29.661 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:29.661 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:29.661 Initialization complete. Launching workers. 00:16:29.661 ======================================================== 00:16:29.661 Latency(us) 00:16:29.661 Device Information : IOPS MiB/s Average min max 00:16:29.661 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3390.00 13.24 294.59 194.51 805.56 00:16:29.661 ======================================================== 00:16:29.661 Total : 3390.00 13.24 294.59 194.51 805.56 00:16:29.661 00:16:29.661 Initializing NVMe Controllers 00:16:29.661 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:29.661 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:29.661 Initialization complete. Launching workers. 00:16:29.661 ======================================================== 00:16:29.661 Latency(us) 00:16:29.661 Device Information : IOPS MiB/s Average min max 00:16:29.661 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3379.00 13.20 295.48 196.02 788.93 00:16:29.661 ======================================================== 00:16:29.661 Total : 3379.00 13.20 295.48 196.02 788.93 00:16:29.661 00:16:29.661 Initializing NVMe Controllers 00:16:29.661 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:29.661 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:29.661 Initialization complete. Launching workers. 
00:16:29.661 ======================================================== 00:16:29.661 Latency(us) 00:16:29.661 Device Information : IOPS MiB/s Average min max 00:16:29.661 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3407.95 13.31 293.01 122.91 833.83 00:16:29.661 ======================================================== 00:16:29.661 Total : 3407.95 13.31 293.01 122.91 833.83 00:16:29.661 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73263 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73264 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.661 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.661 rmmod nvme_tcp 00:16:29.661 rmmod nvme_fabrics 00:16:29.661 rmmod nvme_keyring 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73230 ']' 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73230 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73230 ']' 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73230 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73230 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.919 killing process with pid 73230 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73230' 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73230 00:16:29.919 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73230 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:30.207 10:10:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:30.207 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.207 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.207 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:30.207 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.207 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.207 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:30.467 00:16:30.467 real 0m3.672s 00:16:30.467 user 0m5.773s 00:16:30.467 
sys 0m1.361s 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.467 ************************************ 00:16:30.467 END TEST nvmf_control_msg_list 00:16:30.467 ************************************ 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 ************************************ 00:16:30.467 START TEST nvmf_wait_for_buf 00:16:30.467 ************************************ 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:30.467 * Looking for test storage... 00:16:30.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:30.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.467 --rc genhtml_branch_coverage=1 00:16:30.467 --rc genhtml_function_coverage=1 00:16:30.467 --rc genhtml_legend=1 00:16:30.467 --rc geninfo_all_blocks=1 00:16:30.467 --rc geninfo_unexecuted_blocks=1 00:16:30.467 00:16:30.467 ' 00:16:30.467 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:30.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.468 --rc genhtml_branch_coverage=1 00:16:30.468 --rc genhtml_function_coverage=1 00:16:30.468 --rc genhtml_legend=1 00:16:30.468 --rc geninfo_all_blocks=1 00:16:30.468 --rc geninfo_unexecuted_blocks=1 00:16:30.468 00:16:30.468 ' 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:30.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.468 --rc genhtml_branch_coverage=1 00:16:30.468 --rc genhtml_function_coverage=1 00:16:30.468 --rc genhtml_legend=1 00:16:30.468 --rc geninfo_all_blocks=1 00:16:30.468 --rc geninfo_unexecuted_blocks=1 00:16:30.468 00:16:30.468 ' 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:30.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.468 --rc genhtml_branch_coverage=1 00:16:30.468 --rc genhtml_function_coverage=1 00:16:30.468 --rc genhtml_legend=1 00:16:30.468 --rc geninfo_all_blocks=1 00:16:30.468 --rc geninfo_unexecuted_blocks=1 00:16:30.468 00:16:30.468 ' 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.468 10:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.468 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.468 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:30.727 Cannot find device "nvmf_init_br" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:30.727 Cannot find device "nvmf_init_br2" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:30.727 Cannot find device "nvmf_tgt_br" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.727 Cannot find device "nvmf_tgt_br2" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:30.727 Cannot find device "nvmf_init_br" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:30.727 Cannot find device "nvmf_init_br2" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:30.727 Cannot find device "nvmf_tgt_br" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:30.727 Cannot find device "nvmf_tgt_br2" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:30.727 Cannot find device "nvmf_br" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:30.727 Cannot find device "nvmf_init_if" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:30.727 Cannot find device "nvmf_init_if2" 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.727 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:30.727 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.728 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.728 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:30.728 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.728 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.728 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.728 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.728 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.728 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:30.986 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.986 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:16:30.986 00:16:30.986 --- 10.0.0.3 ping statistics --- 00:16:30.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.986 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:30.986 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:30.986 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:16:30.986 00:16:30.986 --- 10.0.0.4 ping statistics --- 00:16:30.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.986 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:30.986 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:30.986 00:16:30.987 --- 10.0.0.1 ping statistics --- 00:16:30.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.987 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:30.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:30.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:30.987 00:16:30.987 --- 10.0.0.2 ping statistics --- 00:16:30.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.987 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73500 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73500 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73500 ']' 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.987 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:30.987 [2024-11-19 10:10:44.839830] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:16:30.987 [2024-11-19 10:10:44.839959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.244 [2024-11-19 10:10:44.991639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.244 [2024-11-19 10:10:45.058145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.244 [2024-11-19 10:10:45.058208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.244 [2024-11-19 10:10:45.058228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.244 [2024-11-19 10:10:45.058239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.244 [2024-11-19 10:10:45.058248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.244 [2024-11-19 10:10:45.058688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.178 10:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 [2024-11-19 10:10:45.951069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.178 10:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 Malloc0 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 [2024-11-19 10:10:46.021807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:32.178 [2024-11-19 10:10:46.045914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.178 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:32.436 [2024-11-19 10:10:46.240110] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:33.903 Initializing NVMe Controllers 00:16:33.903 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:33.903 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:33.903 Initialization complete. Launching workers. 00:16:33.903 ======================================================== 00:16:33.903 Latency(us) 00:16:33.903 Device Information : IOPS MiB/s Average min max 00:16:33.903 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 501.98 62.75 7999.04 6966.63 11029.99 00:16:33.903 ======================================================== 00:16:33.903 Total : 501.98 62.75 7999.04 6966.63 11029.99 00:16:33.903 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4756 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4756 -eq 0 ]] 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:33.903 rmmod nvme_tcp 00:16:33.903 rmmod nvme_fabrics 00:16:33.903 rmmod nvme_keyring 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73500 ']' 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73500 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73500 ']' 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73500 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73500 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.903 killing process with pid 73500 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73500' 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73500 00:16:33.903 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73500 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:34.163 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:34.163 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:34.422 00:16:34.422 real 0m4.064s 00:16:34.422 user 0m3.580s 00:16:34.422 sys 0m0.831s 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:34.422 ************************************ 00:16:34.422 END TEST nvmf_wait_for_buf 00:16:34.422 ************************************ 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.422 ************************************ 00:16:34.422 START TEST nvmf_nsid 00:16:34.422 ************************************ 00:16:34.422 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:34.681 * Looking for test storage... 
00:16:34.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.681 --rc genhtml_branch_coverage=1 00:16:34.681 --rc genhtml_function_coverage=1 00:16:34.681 --rc genhtml_legend=1 00:16:34.681 --rc geninfo_all_blocks=1 00:16:34.681 --rc geninfo_unexecuted_blocks=1 00:16:34.681 00:16:34.681 ' 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.681 --rc genhtml_branch_coverage=1 00:16:34.681 --rc genhtml_function_coverage=1 00:16:34.681 --rc genhtml_legend=1 00:16:34.681 --rc geninfo_all_blocks=1 00:16:34.681 --rc geninfo_unexecuted_blocks=1 00:16:34.681 00:16:34.681 ' 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.681 --rc genhtml_branch_coverage=1 00:16:34.681 --rc genhtml_function_coverage=1 00:16:34.681 --rc genhtml_legend=1 00:16:34.681 --rc geninfo_all_blocks=1 00:16:34.681 --rc geninfo_unexecuted_blocks=1 00:16:34.681 00:16:34.681 ' 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.681 --rc genhtml_branch_coverage=1 00:16:34.681 --rc genhtml_function_coverage=1 00:16:34.681 --rc genhtml_legend=1 00:16:34.681 --rc geninfo_all_blocks=1 00:16:34.681 --rc geninfo_unexecuted_blocks=1 00:16:34.681 00:16:34.681 ' 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.681 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.682 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:34.682 Cannot find device "nvmf_init_br" 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:34.682 Cannot find device "nvmf_init_br2" 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:34.682 Cannot find device "nvmf_tgt_br" 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.682 Cannot find device "nvmf_tgt_br2" 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:34.682 Cannot find device "nvmf_init_br" 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:34.682 Cannot find device "nvmf_init_br2" 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:34.682 Cannot find device "nvmf_tgt_br" 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:34.682 Cannot find device "nvmf_tgt_br2" 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:16:34.682 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:34.939 Cannot find device "nvmf_br" 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:34.939 Cannot find device "nvmf_init_if" 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:34.939 Cannot find device "nvmf_init_if2" 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:34.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.939 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:34.940 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
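The nvmf_veth_init trace above builds the loopback network this test runs on: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, addresses 10.0.0.1-10.0.0.4/24 assigned, and all bridge-side ends enslaved to nvmf_br. A minimal standalone sketch of the same topology, with interface names and addresses taken from the trace (the second pair of interfaces, nvmf_init_if2/nvmf_tgt_if2, is configured the same way; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up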
00:16:35.211 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.211 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:35.212 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:35.212 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:35.212 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.212 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:35.212 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:35.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:16:35.212 00:16:35.212 --- 10.0.0.3 ping statistics --- 00:16:35.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.212 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:35.212 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:35.212 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:35.212 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:16:35.212 00:16:35.213 --- 10.0.0.4 ping statistics --- 00:16:35.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.213 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:35.213 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:35.213 00:16:35.213 --- 10.0.0.1 ping statistics --- 00:16:35.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.213 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:35.213 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:35.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:35.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:35.213 00:16:35.213 --- 10.0.0.2 ping statistics --- 00:16:35.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.213 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:35.213 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.213 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:16:35.213 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:35.213 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.213 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73772 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73772 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73772 ']' 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.214 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:35.214 [2024-11-19 10:10:48.976760] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
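Once the four pings succeed, nvmfappstart launches the SPDK target inside the namespace and waits for it to come up; the command line is visible in the trace above. A condensed sketch of that step, assuming the default /var/tmp/spdk.sock RPC socket (the real waitforlisten helper retries more carefully and records the PID for later cleanup):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready to accept configuration RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; do
      sleep 0.5
  done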
00:16:35.214 [2024-11-19 10:10:48.976883] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.475 [2024-11-19 10:10:49.132595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.475 [2024-11-19 10:10:49.199049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.475 [2024-11-19 10:10:49.199132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.475 [2024-11-19 10:10:49.199158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.475 [2024-11-19 10:10:49.199168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.475 [2024-11-19 10:10:49.199177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.475 [2024-11-19 10:10:49.199673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.475 [2024-11-19 10:10:49.260268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:35.475 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.475 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:35.475 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:35.475 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:35.475 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73791 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=9dbd0ad7-9e0a-42db-a589-fb65c424745b 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ad9c8ea2-b3dc-4a93-abed-4834f208e333 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ec84ed30-7a76-4979-9b14-352512862e7a 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:35.732 null0 00:16:35.732 null1 00:16:35.732 null2 00:16:35.732 [2024-11-19 10:10:49.433324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.732 [2024-11-19 10:10:49.440321] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:35.732 [2024-11-19 10:10:49.440474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73791 ] 00:16:35.732 [2024-11-19 10:10:49.457423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73791 /var/tmp/tgt2.sock 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73791 ']' 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
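The three UUIDs generated here are later compared against the NGUIDs that namespaces 1-3 of the connected controller report: each UUID with its dashes stripped must match the nguid field of the corresponding /dev/nvme0nX. A short sketch of that check, built from the uuid2nguid and nvme_get_nguid calls visible further down (helper internals are an assumption; nvme-cli and jq as used in the trace):

  ns1uuid=9dbd0ad7-9e0a-42db-a589-fb65c424745b              # from uuidgen above
  expected=$(tr -d - <<< "$ns1uuid")
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)  # NGUID the target reports
  [[ "${actual^^}" == "${expected^^}" ]] && echo "nsid 1 NGUID matches"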
00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.732 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:35.732 [2024-11-19 10:10:49.587769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.990 [2024-11-19 10:10:49.656458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.990 [2024-11-19 10:10:49.734028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:36.248 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.248 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:36.248 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:36.507 [2024-11-19 10:10:50.369910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.507 [2024-11-19 10:10:50.386076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:16:36.765 nvme0n1 nvme0n2 00:16:36.765 nvme1n1 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:36.765 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:36.766 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:36.766 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:16:36.766 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:16:36.766 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:16:37.698 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:37.698 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:37.956 10:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 9dbd0ad7-9e0a-42db-a589-fb65c424745b 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9dbd0ad79e0a42dba589fb65c424745b 00:16:37.956 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9DBD0AD79E0A42DBA589FB65C424745B 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 9DBD0AD79E0A42DBA589FB65C424745B == \9\D\B\D\0\A\D\7\9\E\0\A\4\2\D\B\A\5\8\9\F\B\6\5\C\4\2\4\7\4\5\B ]] 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ad9c8ea2-b3dc-4a93-abed-4834f208e333 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ad9c8ea2b3dc4a93abed4834f208e333 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AD9C8EA2B3DC4A93ABED4834F208E333 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AD9C8EA2B3DC4A93ABED4834F208E333 == \A\D\9\C\8\E\A\2\B\3\D\C\4\A\9\3\A\B\E\D\4\8\3\4\F\2\0\8\E\3\3\3 ]] 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:37.957 10:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ec84ed30-7a76-4979-9b14-352512862e7a 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ec84ed307a7649799b14352512862e7a 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EC84ED307A7649799B14352512862E7A 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ EC84ED307A7649799B14352512862E7A == \E\C\8\4\E\D\3\0\7\A\7\6\4\9\7\9\9\B\1\4\3\5\2\5\1\2\8\6\2\E\7\A ]] 00:16:37.957 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:38.215 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:38.215 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:38.215 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73791 00:16:38.215 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73791 ']' 00:16:38.215 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73791 00:16:38.215 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:38.215 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.215 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73791 00:16:38.215 killing process with pid 73791 00:16:38.215 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:38.215 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:38.215 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73791' 00:16:38.215 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73791 00:16:38.215 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73791 00:16:38.782 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:38.782 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:38.782 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:38.783 rmmod nvme_tcp 00:16:38.783 rmmod nvme_fabrics 00:16:38.783 rmmod nvme_keyring 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73772 ']' 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73772 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73772 ']' 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73772 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73772 00:16:38.783 killing process with pid 73772 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73772' 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73772 00:16:38.783 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73772 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:39.042 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:39.299 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.299 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.299 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:39.300 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.300 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.300 10:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.300 10:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:16:39.300 00:16:39.300 real 0m4.770s 00:16:39.300 user 0m6.974s 00:16:39.300 sys 0m1.730s 00:16:39.300 10:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.300 10:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:39.300 ************************************ 00:16:39.300 END TEST nvmf_nsid 00:16:39.300 ************************************ 00:16:39.300 10:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:39.300 ************************************ 00:16:39.300 END TEST nvmf_target_extra 00:16:39.300 ************************************ 00:16:39.300 00:16:39.300 real 5m15.549s 00:16:39.300 user 11m5.083s 00:16:39.300 sys 1m9.248s 00:16:39.300 10:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.300 10:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.300 10:10:53 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:39.300 10:10:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.300 10:10:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.300 10:10:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.300 ************************************ 00:16:39.300 START TEST nvmf_host 00:16:39.300 ************************************ 00:16:39.300 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:39.559 * Looking for test storage... 
00:16:39.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.559 --rc genhtml_branch_coverage=1 00:16:39.559 --rc genhtml_function_coverage=1 00:16:39.559 --rc genhtml_legend=1 00:16:39.559 --rc geninfo_all_blocks=1 00:16:39.559 --rc geninfo_unexecuted_blocks=1 00:16:39.559 00:16:39.559 ' 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:39.559 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:39.559 --rc genhtml_branch_coverage=1 00:16:39.559 --rc genhtml_function_coverage=1 00:16:39.559 --rc genhtml_legend=1 00:16:39.559 --rc geninfo_all_blocks=1 00:16:39.559 --rc geninfo_unexecuted_blocks=1 00:16:39.559 00:16:39.559 ' 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.559 --rc genhtml_branch_coverage=1 00:16:39.559 --rc genhtml_function_coverage=1 00:16:39.559 --rc genhtml_legend=1 00:16:39.559 --rc geninfo_all_blocks=1 00:16:39.559 --rc geninfo_unexecuted_blocks=1 00:16:39.559 00:16:39.559 ' 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.559 --rc genhtml_branch_coverage=1 00:16:39.559 --rc genhtml_function_coverage=1 00:16:39.559 --rc genhtml_legend=1 00:16:39.559 --rc geninfo_all_blocks=1 00:16:39.559 --rc geninfo_unexecuted_blocks=1 00:16:39.559 00:16:39.559 ' 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.559 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.560 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:39.560 
10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.560 ************************************ 00:16:39.560 START TEST nvmf_identify 00:16:39.560 ************************************ 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:39.560 * Looking for test storage... 00:16:39.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:16:39.560 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.819 --rc genhtml_branch_coverage=1 00:16:39.819 --rc genhtml_function_coverage=1 00:16:39.819 --rc genhtml_legend=1 00:16:39.819 --rc geninfo_all_blocks=1 00:16:39.819 --rc geninfo_unexecuted_blocks=1 00:16:39.819 00:16:39.819 ' 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.819 --rc genhtml_branch_coverage=1 00:16:39.819 --rc genhtml_function_coverage=1 00:16:39.819 --rc genhtml_legend=1 00:16:39.819 --rc geninfo_all_blocks=1 00:16:39.819 --rc geninfo_unexecuted_blocks=1 00:16:39.819 00:16:39.819 ' 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.819 --rc genhtml_branch_coverage=1 00:16:39.819 --rc genhtml_function_coverage=1 00:16:39.819 --rc genhtml_legend=1 00:16:39.819 --rc geninfo_all_blocks=1 00:16:39.819 --rc geninfo_unexecuted_blocks=1 00:16:39.819 00:16:39.819 ' 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.819 --rc genhtml_branch_coverage=1 00:16:39.819 --rc genhtml_function_coverage=1 00:16:39.819 --rc genhtml_legend=1 00:16:39.819 --rc geninfo_all_blocks=1 00:16:39.819 --rc geninfo_unexecuted_blocks=1 00:16:39.819 00:16:39.819 ' 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.819 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.820 
10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.820 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.820 10:10:53 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:39.820 Cannot find device "nvmf_init_br" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:39.820 Cannot find device "nvmf_init_br2" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:39.820 Cannot find device "nvmf_tgt_br" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:39.820 Cannot find device "nvmf_tgt_br2" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:39.820 Cannot find device "nvmf_init_br" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:39.820 Cannot find device "nvmf_init_br2" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:39.820 Cannot find device "nvmf_tgt_br" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:39.820 Cannot find device "nvmf_tgt_br2" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:39.820 Cannot find device "nvmf_br" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:39.820 Cannot find device "nvmf_init_if" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:39.820 Cannot find device "nvmf_init_if2" 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:16:39.820 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.821 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:16:39.821 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.079 
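After the best-effort teardown of any leftover interfaces (the "Cannot find device" lines above are expected on a clean machine), nvmf_veth_init builds a small virtual test network: veth pairs for the initiator and target sides, with the target ends moved into the nvmf_tgt_ns_spdk namespace and everything numbered out of 10.0.0.0/24. A condensed sketch of the shape this sequence is assembling, using the interface names and addresses from the log; the bridge enslaving happens in the entries that follow, and the symmetric *_if2/*_br2 pair is elided for brevity:

    # Sketch of the topology nvmf_veth_init assembles (one pair per side shown).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # the *_br peers join this bridge next
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br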
10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:40.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:40.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:16:40.079 00:16:40.079 --- 10.0.0.3 ping statistics --- 00:16:40.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.079 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:40.079 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:40.079 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:16:40.079 00:16:40.079 --- 10.0.0.4 ping statistics --- 00:16:40.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.079 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:40.079 00:16:40.079 --- 10.0.0.1 ping statistics --- 00:16:40.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.079 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:40.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:40.079 00:16:40.079 --- 10.0.0.2 ping statistics --- 00:16:40.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.079 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.079 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74157 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74157 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74157 ']' 00:16:40.337 
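With addressing and the bridge in place, the harness opens the NVMe/TCP port in iptables and then verifies reachability in both directions with single pings: from the host to the namespaced target addresses, and from inside the namespace back to the host. Roughly, leaving out the SPDK_NVMF comment tagging that the ipts wrapper adds:

    # Allow NVMe/TCP (4420) in from both initiator interfaces, plus bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity check in both directions across the bridge.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2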
10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.337 10:10:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.337 [2024-11-19 10:10:54.019156] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:40.337 [2024-11-19 10:10:54.019293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.337 [2024-11-19 10:10:54.162566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.337 [2024-11-19 10:10:54.225219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.337 [2024-11-19 10:10:54.225275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.337 [2024-11-19 10:10:54.225286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.337 [2024-11-19 10:10:54.225295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.337 [2024-11-19 10:10:54.225302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
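The target is launched inside the namespace with core mask 0xF and the full 0xFFFF tracepoint group enabled, and the script then blocks until the application's JSON-RPC socket is accepting commands. A minimal approximation of that launch-and-wait step; the real waitforlisten helper in autotest_common.sh is more thorough (it also confirms the RPC server responds), so the polling loop here is only illustrative:

    # Launch nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # illustrative wait, not the real helper
    echo "nvmf_tgt up as pid $nvmfpid"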
00:16:40.596 [2024-11-19 10:10:54.226519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.596 [2024-11-19 10:10:54.226657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.596 [2024-11-19 10:10:54.227067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.596 [2024-11-19 10:10:54.227070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.596 [2024-11-19 10:10:54.282712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 [2024-11-19 10:10:54.355208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 Malloc0 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 [2024-11-19 10:10:54.459347] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.596 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:40.596 [ 00:16:40.596 { 00:16:40.596 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:40.596 "subtype": "Discovery", 00:16:40.596 "listen_addresses": [ 00:16:40.596 { 00:16:40.596 "trtype": "TCP", 00:16:40.596 "adrfam": "IPv4", 00:16:40.596 "traddr": "10.0.0.3", 00:16:40.596 "trsvcid": "4420" 00:16:40.596 } 00:16:40.596 ], 00:16:40.596 "allow_any_host": true, 00:16:40.596 "hosts": [] 00:16:40.596 }, 00:16:40.596 { 00:16:40.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.596 "subtype": "NVMe", 00:16:40.596 "listen_addresses": [ 00:16:40.596 { 00:16:40.596 "trtype": "TCP", 00:16:40.596 "adrfam": "IPv4", 00:16:40.596 "traddr": "10.0.0.3", 00:16:40.596 "trsvcid": "4420" 00:16:40.596 } 00:16:40.596 ], 00:16:40.596 "allow_any_host": true, 00:16:40.596 "hosts": [], 00:16:40.859 "serial_number": "SPDK00000000000001", 00:16:40.859 "model_number": "SPDK bdev Controller", 00:16:40.859 "max_namespaces": 32, 00:16:40.859 "min_cntlid": 1, 00:16:40.859 "max_cntlid": 65519, 00:16:40.859 "namespaces": [ 00:16:40.859 { 00:16:40.859 "nsid": 1, 00:16:40.859 "bdev_name": "Malloc0", 00:16:40.859 "name": "Malloc0", 00:16:40.859 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:40.859 "eui64": "ABCDEF0123456789", 00:16:40.859 "uuid": "4063b534-7fb5-4c81-a877-708531069481" 00:16:40.859 } 00:16:40.859 ] 00:16:40.859 } 00:16:40.859 ] 00:16:40.859 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.859 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:40.859 [2024-11-19 10:10:54.512462] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
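At this point the target has been configured entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, an NVM subsystem exposing that bdev as namespace 1, and listeners on 10.0.0.3:4420 for both the subsystem and the discovery service. The rpc_cmd calls above correspond roughly to the scripts/rpc.py invocations below, and a Linux initiator in the default namespace could then reach the same target with kernel nvme-cli; this is a sketch, with the hostnqn/hostid values taken from the ones generated earlier in this run:

    # Target-side configuration, roughly as issued via rpc_cmd above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # Initiator-side view of the same target (kernel nvme-cli, unlike the SPDK identify tool used below).
    nvme discover -t tcp -a 10.0.0.3 -s 4420
    nvme connect  -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a \
        --hostid=6147973c-080a-4377-b1e7-85172bdc559a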
00:16:40.859 [2024-11-19 10:10:54.512520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74184 ] 00:16:40.859 [2024-11-19 10:10:54.678010] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:40.859 [2024-11-19 10:10:54.678096] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:40.859 [2024-11-19 10:10:54.678104] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:40.859 [2024-11-19 10:10:54.678118] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:40.859 [2024-11-19 10:10:54.678129] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:40.859 [2024-11-19 10:10:54.678464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:40.859 [2024-11-19 10:10:54.678540] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c9d750 0 00:16:40.859 [2024-11-19 10:10:54.683938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:40.859 [2024-11-19 10:10:54.683965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:40.859 [2024-11-19 10:10:54.683972] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:40.859 [2024-11-19 10:10:54.683976] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:40.859 [2024-11-19 10:10:54.684009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.684018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.684023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.859 [2024-11-19 10:10:54.684038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:40.859 [2024-11-19 10:10:54.684081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.859 [2024-11-19 10:10:54.690944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.859 [2024-11-19 10:10:54.690974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.859 [2024-11-19 10:10:54.690983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.690992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.859 [2024-11-19 10:10:54.691008] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:40.859 [2024-11-19 10:10:54.691019] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:40.859 [2024-11-19 10:10:54.691025] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:40.859 [2024-11-19 10:10:54.691043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:40.859 [2024-11-19 10:10:54.691053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.859 [2024-11-19 10:10:54.691063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.859 [2024-11-19 10:10:54.691092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.859 [2024-11-19 10:10:54.691165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.859 [2024-11-19 10:10:54.691173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.859 [2024-11-19 10:10:54.691177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.859 [2024-11-19 10:10:54.691188] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:40.859 [2024-11-19 10:10:54.691196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:40.859 [2024-11-19 10:10:54.691205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.859 [2024-11-19 10:10:54.691223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.859 [2024-11-19 10:10:54.691243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.859 [2024-11-19 10:10:54.691294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.859 [2024-11-19 10:10:54.691301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.859 [2024-11-19 10:10:54.691305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.859 [2024-11-19 10:10:54.691316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:40.859 [2024-11-19 10:10:54.691325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:40.859 [2024-11-19 10:10:54.691333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.859 [2024-11-19 10:10:54.691350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.859 [2024-11-19 10:10:54.691369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.859 [2024-11-19 10:10:54.691418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.859 [2024-11-19 10:10:54.691426] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.859 [2024-11-19 10:10:54.691430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.859 [2024-11-19 10:10:54.691440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:40.859 [2024-11-19 10:10:54.691452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.859 [2024-11-19 10:10:54.691461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.691469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.860 [2024-11-19 10:10:54.691487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.860 [2024-11-19 10:10:54.691531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.860 [2024-11-19 10:10:54.691538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.860 [2024-11-19 10:10:54.691542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.691546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.860 [2024-11-19 10:10:54.691552] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:40.860 [2024-11-19 10:10:54.691558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:40.860 [2024-11-19 10:10:54.691566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:40.860 [2024-11-19 10:10:54.691678] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:40.860 [2024-11-19 10:10:54.691684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:40.860 [2024-11-19 10:10:54.691695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.691699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.691704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.691711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.860 [2024-11-19 10:10:54.691732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.860 [2024-11-19 10:10:54.691777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.860 [2024-11-19 10:10:54.691785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.860 [2024-11-19 10:10:54.691789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:16:40.860 [2024-11-19 10:10:54.691793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.860 [2024-11-19 10:10:54.691799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:40.860 [2024-11-19 10:10:54.691810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.691816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.691820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.691828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.860 [2024-11-19 10:10:54.691847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.860 [2024-11-19 10:10:54.691890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.860 [2024-11-19 10:10:54.691897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.860 [2024-11-19 10:10:54.691901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.691906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.860 [2024-11-19 10:10:54.691911] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:40.860 [2024-11-19 10:10:54.691932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:40.860 [2024-11-19 10:10:54.691942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:40.860 [2024-11-19 10:10:54.691959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:40.860 [2024-11-19 10:10:54.691971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.691976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.691984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.860 [2024-11-19 10:10:54.692006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.860 [2024-11-19 10:10:54.692111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:40.860 [2024-11-19 10:10:54.692120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:40.860 [2024-11-19 10:10:54.692124] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692129] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d750): datao=0, datal=4096, cccid=0 00:16:40.860 [2024-11-19 10:10:54.692134] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d01740) on tqpair(0x1c9d750): expected_datao=0, payload_size=4096 00:16:40.860 [2024-11-19 10:10:54.692139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692148] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692153] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.860 [2024-11-19 10:10:54.692169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.860 [2024-11-19 10:10:54.692173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.860 [2024-11-19 10:10:54.692186] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:40.860 [2024-11-19 10:10:54.692192] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:40.860 [2024-11-19 10:10:54.692197] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:40.860 [2024-11-19 10:10:54.692203] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:40.860 [2024-11-19 10:10:54.692208] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:40.860 [2024-11-19 10:10:54.692223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:40.860 [2024-11-19 10:10:54.692238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:40.860 [2024-11-19 10:10:54.692247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.692265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.860 [2024-11-19 10:10:54.692287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.860 [2024-11-19 10:10:54.692347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.860 [2024-11-19 10:10:54.692355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.860 [2024-11-19 10:10:54.692359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.860 [2024-11-19 10:10:54.692372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.692388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.860 
[2024-11-19 10:10:54.692395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.692409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.860 [2024-11-19 10:10:54.692416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.692438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.860 [2024-11-19 10:10:54.692444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.692458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.860 [2024-11-19 10:10:54.692464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:40.860 [2024-11-19 10:10:54.692478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:40.860 [2024-11-19 10:10:54.692487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.860 [2024-11-19 10:10:54.692491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d750) 00:16:40.860 [2024-11-19 10:10:54.692498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.860 [2024-11-19 10:10:54.692520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01740, cid 0, qid 0 00:16:40.860 [2024-11-19 10:10:54.692528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d018c0, cid 1, qid 0 00:16:40.860 [2024-11-19 10:10:54.692533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01a40, cid 2, qid 0 00:16:40.860 [2024-11-19 10:10:54.692539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.860 [2024-11-19 10:10:54.692544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01d40, cid 4, qid 0 00:16:40.860 [2024-11-19 10:10:54.692631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.860 [2024-11-19 10:10:54.692638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.860 [2024-11-19 10:10:54.692642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01d40) on tqpair=0x1c9d750 00:16:40.861 [2024-11-19 
10:10:54.692653] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:40.861 [2024-11-19 10:10:54.692658] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:40.861 [2024-11-19 10:10:54.692671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d750) 00:16:40.861 [2024-11-19 10:10:54.692684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.861 [2024-11-19 10:10:54.692703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01d40, cid 4, qid 0 00:16:40.861 [2024-11-19 10:10:54.692764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:40.861 [2024-11-19 10:10:54.692771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:40.861 [2024-11-19 10:10:54.692775] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692779] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d750): datao=0, datal=4096, cccid=4 00:16:40.861 [2024-11-19 10:10:54.692784] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d01d40) on tqpair(0x1c9d750): expected_datao=0, payload_size=4096 00:16:40.861 [2024-11-19 10:10:54.692789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692796] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692801] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.861 [2024-11-19 10:10:54.692816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.861 [2024-11-19 10:10:54.692820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01d40) on tqpair=0x1c9d750 00:16:40.861 [2024-11-19 10:10:54.692839] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:40.861 [2024-11-19 10:10:54.692872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d750) 00:16:40.861 [2024-11-19 10:10:54.692887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.861 [2024-11-19 10:10:54.692894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.692903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9d750) 00:16:40.861 [2024-11-19 10:10:54.692909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.861 [2024-11-19 10:10:54.692950] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01d40, cid 4, qid 0 00:16:40.861 [2024-11-19 10:10:54.692959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01ec0, cid 5, qid 0 00:16:40.861 [2024-11-19 10:10:54.693084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:40.861 [2024-11-19 10:10:54.693092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:40.861 [2024-11-19 10:10:54.693096] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693100] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d750): datao=0, datal=1024, cccid=4 00:16:40.861 [2024-11-19 10:10:54.693105] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d01d40) on tqpair(0x1c9d750): expected_datao=0, payload_size=1024 00:16:40.861 [2024-11-19 10:10:54.693109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693117] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693122] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.861 [2024-11-19 10:10:54.693134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.861 [2024-11-19 10:10:54.693138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01ec0) on tqpair=0x1c9d750 00:16:40.861 [2024-11-19 10:10:54.693161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.861 [2024-11-19 10:10:54.693169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.861 [2024-11-19 10:10:54.693173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01d40) on tqpair=0x1c9d750 00:16:40.861 [2024-11-19 10:10:54.693191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d750) 00:16:40.861 [2024-11-19 10:10:54.693204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.861 [2024-11-19 10:10:54.693231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01d40, cid 4, qid 0 00:16:40.861 [2024-11-19 10:10:54.693307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:40.861 [2024-11-19 10:10:54.693315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:40.861 [2024-11-19 10:10:54.693319] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693323] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d750): datao=0, datal=3072, cccid=4 00:16:40.861 [2024-11-19 10:10:54.693327] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d01d40) on tqpair(0x1c9d750): expected_datao=0, payload_size=3072 00:16:40.861 [2024-11-19 10:10:54.693332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693340] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:16:40.861 [2024-11-19 10:10:54.693344] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.861 [2024-11-19 10:10:54.693359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.861 [2024-11-19 10:10:54.693363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01d40) on tqpair=0x1c9d750 00:16:40.861 [2024-11-19 10:10:54.693378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d750) 00:16:40.861 [2024-11-19 10:10:54.693391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.861 [2024-11-19 10:10:54.693416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01d40, cid 4, qid 0 00:16:40.861 [2024-11-19 10:10:54.693476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:40.861 [2024-11-19 10:10:54.693483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:40.861 [2024-11-19 10:10:54.693487] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:40.861 [2024-11-19 10:10:54.693491] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d750): datao=0, datal=8, cccid=4 00:16:40.861 [2024-11-19 10:10:54.693504] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d01d40) on tqpair(0x1c9d750): expected_datao=0, payload_size=8 00:16:40.861 [2024-11-19 10:10:54.693509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.861 ===================================================== 00:16:40.861 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:40.861 ===================================================== 00:16:40.861 Controller Capabilities/Features 00:16:40.861 ================================ 00:16:40.861 Vendor ID: 0000 00:16:40.861 Subsystem Vendor ID: 0000 00:16:40.861 Serial Number: .................... 00:16:40.861 Model Number: ........................................ 
00:16:40.861 Firmware Version: 25.01 00:16:40.861 Recommended Arb Burst: 0 00:16:40.861 IEEE OUI Identifier: 00 00 00 00:16:40.861 Multi-path I/O 00:16:40.861 May have multiple subsystem ports: No 00:16:40.861 May have multiple controllers: No 00:16:40.861 Associated with SR-IOV VF: No 00:16:40.861 Max Data Transfer Size: 131072 00:16:40.861 Max Number of Namespaces: 0 00:16:40.861 Max Number of I/O Queues: 1024 00:16:40.861 NVMe Specification Version (VS): 1.3 00:16:40.861 NVMe Specification Version (Identify): 1.3 00:16:40.861 Maximum Queue Entries: 128 00:16:40.861 Contiguous Queues Required: Yes 00:16:40.861 Arbitration Mechanisms Supported 00:16:40.861 Weighted Round Robin: Not Supported 00:16:40.861 Vendor Specific: Not Supported 00:16:40.861 Reset Timeout: 15000 ms 00:16:40.861 Doorbell Stride: 4 bytes 00:16:40.861 NVM Subsystem Reset: Not Supported 00:16:40.861 Command Sets Supported 00:16:40.861 NVM Command Set: Supported 00:16:40.861 Boot Partition: Not Supported 00:16:40.861 Memory Page Size Minimum: 4096 bytes 00:16:40.861 Memory Page Size Maximum: 4096 bytes 00:16:40.861 Persistent Memory Region: Not Supported 00:16:40.861 Optional Asynchronous Events Supported 00:16:40.861 Namespace Attribute Notices: Not Supported 00:16:40.861 Firmware Activation Notices: Not Supported 00:16:40.861 ANA Change Notices: Not Supported 00:16:40.861 PLE Aggregate Log Change Notices: Not Supported 00:16:40.861 LBA Status Info Alert Notices: Not Supported 00:16:40.861 EGE Aggregate Log Change Notices: Not Supported 00:16:40.861 Normal NVM Subsystem Shutdown event: Not Supported 00:16:40.861 Zone Descriptor Change Notices: Not Supported 00:16:40.861 Discovery Log Change Notices: Supported 00:16:40.861 Controller Attributes 00:16:40.861 128-bit Host Identifier: Not Supported 00:16:40.861 Non-Operational Permissive Mode: Not Supported 00:16:40.861 NVM Sets: Not Supported 00:16:40.861 Read Recovery Levels: Not Supported 00:16:40.861 Endurance Groups: Not Supported 00:16:40.861 Predictable Latency Mode: Not Supported 00:16:40.861 Traffic Based Keep ALive: Not Supported 00:16:40.861 Namespace Granularity: Not Supported 00:16:40.861 SQ Associations: Not Supported 00:16:40.861 UUID List: Not Supported 00:16:40.862 Multi-Domain Subsystem: Not Supported 00:16:40.862 Fixed Capacity Management: Not Supported 00:16:40.862 Variable Capacity Management: Not Supported 00:16:40.862 Delete Endurance Group: Not Supported 00:16:40.862 Delete NVM Set: Not Supported 00:16:40.862 Extended LBA Formats Supported: Not Supported 00:16:40.862 Flexible Data Placement Supported: Not Supported 00:16:40.862 00:16:40.862 Controller Memory Buffer Support 00:16:40.862 ================================ 00:16:40.862 Supported: No 00:16:40.862 00:16:40.862 Persistent Memory Region Support 00:16:40.862 ================================ 00:16:40.862 Supported: No 00:16:40.862 00:16:40.862 Admin Command Set Attributes 00:16:40.862 ============================ 00:16:40.862 Security Send/Receive: Not Supported 00:16:40.862 Format NVM: Not Supported 00:16:40.862 Firmware Activate/Download: Not Supported 00:16:40.862 Namespace Management: Not Supported 00:16:40.862 Device Self-Test: Not Supported 00:16:40.862 Directives: Not Supported 00:16:40.862 NVMe-MI: Not Supported 00:16:40.862 Virtualization Management: Not Supported 00:16:40.862 Doorbell Buffer Config: Not Supported 00:16:40.862 Get LBA Status Capability: Not Supported 00:16:40.862 Command & Feature Lockdown Capability: Not Supported 00:16:40.862 Abort Command Limit: 1 00:16:40.862 Async 
Event Request Limit: 4 00:16:40.862 Number of Firmware Slots: N/A 00:16:40.862 Firmware Slot 1 Read-Only: N/A 00:16:40.862 Firmware Activation Without Reset: N/A 00:16:40.862 Multiple Update Detection Support: N/A 00:16:40.862 Firmware Update Granularity: No Information Provided 00:16:40.862 Per-Namespace SMART Log: No 00:16:40.862 Asymmetric Namespace Access Log Page: Not Supported 00:16:40.862 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:40.862 Command Effects Log Page: Not Supported 00:16:40.862 Get Log Page Extended Data: Supported 00:16:40.862 Telemetry Log Pages: Not Supported 00:16:40.862 Persistent Event Log Pages: Not Supported 00:16:40.862 Supported Log Pages Log Page: May Support 00:16:40.862 Commands Supported & Effects Log Page: Not Supported 00:16:40.862 Feature Identifiers & Effects Log Page:May Support 00:16:40.862 NVMe-MI Commands & Effects Log Page: May Support 00:16:40.862 Data Area 4 for Telemetry Log: Not Supported 00:16:40.862 Error Log Page Entries Supported: 128 00:16:40.862 Keep Alive: Not Supported 00:16:40.862 00:16:40.862 NVM Command Set Attributes 00:16:40.862 ========================== 00:16:40.862 Submission Queue Entry Size 00:16:40.862 Max: 1 00:16:40.862 Min: 1 00:16:40.862 Completion Queue Entry Size 00:16:40.862 Max: 1 00:16:40.862 Min: 1 00:16:40.862 Number of Namespaces: 0 00:16:40.862 Compare Command: Not Supported 00:16:40.862 Write Uncorrectable Command: Not Supported 00:16:40.862 Dataset Management Command: Not Supported 00:16:40.862 Write Zeroes Command: Not Supported 00:16:40.862 Set Features Save Field: Not Supported 00:16:40.862 Reservations: Not Supported 00:16:40.862 Timestamp: Not Supported 00:16:40.862 Copy: Not Supported 00:16:40.862 Volatile Write Cache: Not Present 00:16:40.862 Atomic Write Unit (Normal): 1 00:16:40.862 Atomic Write Unit (PFail): 1 00:16:40.862 Atomic Compare & Write Unit: 1 00:16:40.862 Fused Compare & Write: Supported 00:16:40.862 Scatter-Gather List 00:16:40.862 SGL Command Set: Supported 00:16:40.862 SGL Keyed: Supported 00:16:40.862 SGL Bit Bucket Descriptor: Not Supported 00:16:40.862 SGL Metadata Pointer: Not Supported 00:16:40.862 Oversized SGL: Not Supported 00:16:40.862 SGL Metadata Address: Not Supported 00:16:40.862 SGL Offset: Supported 00:16:40.862 Transport SGL Data Block: Not Supported 00:16:40.862 Replay Protected Memory Block: Not Supported 00:16:40.862 00:16:40.862 Firmware Slot Information 00:16:40.862 ========================= 00:16:40.862 Active slot: 0 00:16:40.862 00:16:40.862 00:16:40.862 Error Log 00:16:40.862 ========= 00:16:40.862 00:16:40.862 Active Namespaces 00:16:40.862 ================= 00:16:40.862 Discovery Log Page 00:16:40.862 ================== 00:16:40.862 Generation Counter: 2 00:16:40.862 Number of Records: 2 00:16:40.862 Record Format: 0 00:16:40.862 00:16:40.862 Discovery Log Entry 0 00:16:40.862 ---------------------- 00:16:40.862 Transport Type: 3 (TCP) 00:16:40.862 Address Family: 1 (IPv4) 00:16:40.862 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:40.862 Entry Flags: 00:16:40.862 Duplicate Returned Information: 1 00:16:40.862 Explicit Persistent Connection Support for Discovery: 1 00:16:40.862 Transport Requirements: 00:16:40.862 Secure Channel: Not Required 00:16:40.862 Port ID: 0 (0x0000) 00:16:40.862 Controller ID: 65535 (0xffff) 00:16:40.862 Admin Max SQ Size: 128 00:16:40.862 Transport Service Identifier: 4420 00:16:40.862 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:40.862 Transport Address: 10.0.0.3 00:16:40.862 
Discovery Log Entry 1 00:16:40.862 ---------------------- 00:16:40.862 Transport Type: 3 (TCP) 00:16:40.862 Address Family: 1 (IPv4) 00:16:40.862 Subsystem Type: 2 (NVM Subsystem) 00:16:40.862 Entry Flags: 00:16:40.862 Duplicate Returned Information: 0 00:16:40.862 Explicit Persistent Connection Support for Discovery: 0 00:16:40.862 Transport Requirements: 00:16:40.862 Secure Channel: Not Required 00:16:40.862 Port ID: 0 (0x0000) 00:16:40.862 Controller ID: 65535 (0xffff) 00:16:40.862 Admin Max SQ Size: 128 00:16:40.862 Transport Service Identifier: 4420 00:16:40.862 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:40.862 Transport Address: 10.0.0.3 [2024-11-19 10:10:54.693516] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:40.862 [2024-11-19 10:10:54.693521] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:40.862 [2024-11-19 10:10:54.693536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.862 [2024-11-19 10:10:54.693544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.862 [2024-11-19 10:10:54.693547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.862 [2024-11-19 10:10:54.693552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01d40) on tqpair=0x1c9d750 00:16:40.862 [2024-11-19 10:10:54.693645] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:16:40.862 [2024-11-19 10:10:54.693659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01740) on tqpair=0x1c9d750 00:16:40.862 [2024-11-19 10:10:54.693667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.862 [2024-11-19 10:10:54.693673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d018c0) on tqpair=0x1c9d750 00:16:40.862 [2024-11-19 10:10:54.693678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.862 [2024-11-19 10:10:54.693684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01a40) on tqpair=0x1c9d750 00:16:40.862 [2024-11-19 10:10:54.693689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.862 [2024-11-19 10:10:54.693694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.862 [2024-11-19 10:10:54.693699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.862 [2024-11-19 10:10:54.693709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.862 [2024-11-19 10:10:54.693715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.862 [2024-11-19 10:10:54.693719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.862 [2024-11-19 10:10:54.693727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.862 [2024-11-19 10:10:54.693750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.862 [2024-11-19 10:10:54.693800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.862 [2024-11-19 10:10:54.693808] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.862 [2024-11-19 10:10:54.693812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.862 [2024-11-19 10:10:54.693816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.862 [2024-11-19 10:10:54.693825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.862 [2024-11-19 10:10:54.693831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.862 [2024-11-19 10:10:54.693835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.862 [2024-11-19 10:10:54.693842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.862 [2024-11-19 10:10:54.693865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.862 [2024-11-19 10:10:54.693948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.862 [2024-11-19 10:10:54.693957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.862 [2024-11-19 10:10:54.693961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.693965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.693970] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:16:40.863 [2024-11-19 10:10:54.693976] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:16:40.863 [2024-11-19 10:10:54.693987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.693993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.693997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.694072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.694079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.694083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.694099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 
10:10:54.694186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.694193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.694197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.694212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.694300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.694307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.694311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.694326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.694413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.694420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.694424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.694439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.694517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.694524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 
[2024-11-19 10:10:54.694528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.694544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.694628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.694635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.694639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.694655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.694738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.694745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.694749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.694764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.694850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.694857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.694861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.694877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.694886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.694894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.694912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.698959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.698968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.698973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.698977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.863 [2024-11-19 10:10:54.698992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.698999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.699003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d750) 00:16:40.863 [2024-11-19 10:10:54.699012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.863 [2024-11-19 10:10:54.699037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01bc0, cid 3, qid 0 00:16:40.863 [2024-11-19 10:10:54.699098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:40.863 [2024-11-19 10:10:54.699105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:40.863 [2024-11-19 10:10:54.699109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:40.863 [2024-11-19 10:10:54.699113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01bc0) on tqpair=0x1c9d750 00:16:40.864 [2024-11-19 10:10:54.699123] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:16:40.864 00:16:40.864 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:40.864 [2024-11-19 10:10:54.740087] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
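Both identify passes in this run are driven by host/identify.sh through the same spdk_nvme_identify binary: the output above came from a pass against the discovery subsystem, and the command logged just above starts the pass against nqn.2016-06.io.spdk:cnode1. A minimal sketch of the two invocations follows; the cnode1 arguments are copied verbatim from the logged command line, while passing the discovery NQN explicitly as subnqn is an assumption about how the discovery pass was parameterized, not something shown in this excerpt.

$ /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all   # assumed discovery-side invocation; prints the discovery controller and its discovery log page
$ /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all   # invocation from the log above; prints the NVM subsystem controller advertised by discovery entry 1

The '-L all' argument enables all SPDK debug log flags, which is what produces the nvme_tcp/nvme_ctrlr *DEBUG* traces interleaved with the identify output in this section.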
00:16:40.864 [2024-11-19 10:10:54.740144] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74186 ] 00:16:41.125 [2024-11-19 10:10:54.909686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:16:41.125 [2024-11-19 10:10:54.909766] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:41.125 [2024-11-19 10:10:54.909776] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:41.125 [2024-11-19 10:10:54.909793] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:41.125 [2024-11-19 10:10:54.909806] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:41.125 [2024-11-19 10:10:54.910257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:16:41.125 [2024-11-19 10:10:54.910349] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x227b750 0 00:16:41.125 [2024-11-19 10:10:54.914953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:41.125 [2024-11-19 10:10:54.914987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:41.125 [2024-11-19 10:10:54.914995] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:41.125 [2024-11-19 10:10:54.914999] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:41.125 [2024-11-19 10:10:54.915048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.915058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.915063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.125 [2024-11-19 10:10:54.915081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:41.125 [2024-11-19 10:10:54.915123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.125 [2024-11-19 10:10:54.922944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.125 [2024-11-19 10:10:54.922975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.125 [2024-11-19 10:10:54.922982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.922988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.125 [2024-11-19 10:10:54.923009] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:41.125 [2024-11-19 10:10:54.923020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:16:41.125 [2024-11-19 10:10:54.923029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:16:41.125 [2024-11-19 10:10:54.923049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923062] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.125 [2024-11-19 10:10:54.923074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.125 [2024-11-19 10:10:54.923109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.125 [2024-11-19 10:10:54.923173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.125 [2024-11-19 10:10:54.923182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.125 [2024-11-19 10:10:54.923187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.125 [2024-11-19 10:10:54.923200] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:16:41.125 [2024-11-19 10:10:54.923210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:16:41.125 [2024-11-19 10:10:54.923221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.125 [2024-11-19 10:10:54.923241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.125 [2024-11-19 10:10:54.923265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.125 [2024-11-19 10:10:54.923577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.125 [2024-11-19 10:10:54.923597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.125 [2024-11-19 10:10:54.923602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.125 [2024-11-19 10:10:54.923616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:16:41.125 [2024-11-19 10:10:54.923628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:41.125 [2024-11-19 10:10:54.923638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.125 [2024-11-19 10:10:54.923659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.125 [2024-11-19 10:10:54.923683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.125 [2024-11-19 10:10:54.923728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.125 [2024-11-19 10:10:54.923737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.125 
[2024-11-19 10:10:54.923741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.125 [2024-11-19 10:10:54.923754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:41.125 [2024-11-19 10:10:54.923768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.125 [2024-11-19 10:10:54.923778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.125 [2024-11-19 10:10:54.923788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.125 [2024-11-19 10:10:54.923809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.125 [2024-11-19 10:10:54.923907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.125 [2024-11-19 10:10:54.923936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.126 [2024-11-19 10:10:54.923943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.923949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.126 [2024-11-19 10:10:54.923955] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:41.126 [2024-11-19 10:10:54.923963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:41.126 [2024-11-19 10:10:54.923975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:41.126 [2024-11-19 10:10:54.924091] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:16:41.126 [2024-11-19 10:10:54.924100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:41.126 [2024-11-19 10:10:54.924112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.924118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.924123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.924133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.126 [2024-11-19 10:10:54.924162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.126 [2024-11-19 10:10:54.924558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.126 [2024-11-19 10:10:54.924577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.126 [2024-11-19 10:10:54.924594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.924599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 
00:16:41.126 [2024-11-19 10:10:54.924607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:41.126 [2024-11-19 10:10:54.924622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.924628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.924633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.924642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.126 [2024-11-19 10:10:54.924667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.126 [2024-11-19 10:10:54.924719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.126 [2024-11-19 10:10:54.924728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.126 [2024-11-19 10:10:54.924732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.924738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.126 [2024-11-19 10:10:54.924744] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:41.126 [2024-11-19 10:10:54.924751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:41.126 [2024-11-19 10:10:54.924762] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:16:41.126 [2024-11-19 10:10:54.924782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:41.126 [2024-11-19 10:10:54.924795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.924801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.924811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.126 [2024-11-19 10:10:54.924835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.126 [2024-11-19 10:10:54.925277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:41.126 [2024-11-19 10:10:54.925296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:41.126 [2024-11-19 10:10:54.925302] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925308] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227b750): datao=0, datal=4096, cccid=0 00:16:41.126 [2024-11-19 10:10:54.925315] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22df740) on tqpair(0x227b750): expected_datao=0, payload_size=4096 00:16:41.126 [2024-11-19 10:10:54.925321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925333] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925339] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.126 [2024-11-19 10:10:54.925359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.126 [2024-11-19 10:10:54.925363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.126 [2024-11-19 10:10:54.925381] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:16:41.126 [2024-11-19 10:10:54.925388] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:16:41.126 [2024-11-19 10:10:54.925394] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:16:41.126 [2024-11-19 10:10:54.925400] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:16:41.126 [2024-11-19 10:10:54.925406] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:16:41.126 [2024-11-19 10:10:54.925413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:16:41.126 [2024-11-19 10:10:54.925431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:41.126 [2024-11-19 10:10:54.925442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.925464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:41.126 [2024-11-19 10:10:54.925492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.126 [2024-11-19 10:10:54.925889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.126 [2024-11-19 10:10:54.925907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.126 [2024-11-19 10:10:54.925925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.126 [2024-11-19 10:10:54.925943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.925963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.126 [2024-11-19 10:10:54.925972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.925977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 
10:10:54.925982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.925989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.126 [2024-11-19 10:10:54.925998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.926003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.926008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.926015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.126 [2024-11-19 10:10:54.926023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.926028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.926033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.926041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.126 [2024-11-19 10:10:54.926048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:41.126 [2024-11-19 10:10:54.926066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:41.126 [2024-11-19 10:10:54.926077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.926082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227b750) 00:16:41.126 [2024-11-19 10:10:54.926091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.126 [2024-11-19 10:10:54.926120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df740, cid 0, qid 0 00:16:41.126 [2024-11-19 10:10:54.926129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22df8c0, cid 1, qid 0 00:16:41.126 [2024-11-19 10:10:54.926136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfa40, cid 2, qid 0 00:16:41.126 [2024-11-19 10:10:54.926142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.126 [2024-11-19 10:10:54.926148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfd40, cid 4, qid 0 00:16:41.126 [2024-11-19 10:10:54.926583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.126 [2024-11-19 10:10:54.926600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.126 [2024-11-19 10:10:54.926606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.126 [2024-11-19 10:10:54.926611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfd40) on tqpair=0x227b750 00:16:41.126 [2024-11-19 10:10:54.926619] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:16:41.126 [2024-11-19 10:10:54.926626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:41.126 [2024-11-19 10:10:54.926638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:16:41.126 [2024-11-19 10:10:54.926653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.926663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.926669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.926674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227b750) 00:16:41.127 [2024-11-19 10:10:54.926683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:41.127 [2024-11-19 10:10:54.926708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfd40, cid 4, qid 0 00:16:41.127 [2024-11-19 10:10:54.926765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.127 [2024-11-19 10:10:54.926774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.127 [2024-11-19 10:10:54.926779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.926784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfd40) on tqpair=0x227b750 00:16:41.127 [2024-11-19 10:10:54.926871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.926886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.926898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.926903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227b750) 00:16:41.127 [2024-11-19 10:10:54.930944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.127 [2024-11-19 10:10:54.931051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfd40, cid 4, qid 0 00:16:41.127 [2024-11-19 10:10:54.931127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:41.127 [2024-11-19 10:10:54.931144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:41.127 [2024-11-19 10:10:54.931153] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931161] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227b750): datao=0, datal=4096, cccid=4 00:16:41.127 [2024-11-19 10:10:54.931171] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22dfd40) on tqpair(0x227b750): expected_datao=0, payload_size=4096 00:16:41.127 [2024-11-19 10:10:54.931180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931206] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 
10:10:54.931223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.127 [2024-11-19 10:10:54.931235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.127 [2024-11-19 10:10:54.931243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfd40) on tqpair=0x227b750 00:16:41.127 [2024-11-19 10:10:54.931287] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:16:41.127 [2024-11-19 10:10:54.931313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.931337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.931357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227b750) 00:16:41.127 [2024-11-19 10:10:54.931384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.127 [2024-11-19 10:10:54.931430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfd40, cid 4, qid 0 00:16:41.127 [2024-11-19 10:10:54.931774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:41.127 [2024-11-19 10:10:54.931801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:41.127 [2024-11-19 10:10:54.931808] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931813] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227b750): datao=0, datal=4096, cccid=4 00:16:41.127 [2024-11-19 10:10:54.931820] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22dfd40) on tqpair(0x227b750): expected_datao=0, payload_size=4096 00:16:41.127 [2024-11-19 10:10:54.931826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931837] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931842] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.127 [2024-11-19 10:10:54.931863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.127 [2024-11-19 10:10:54.931867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfd40) on tqpair=0x227b750 00:16:41.127 [2024-11-19 10:10:54.931902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.931939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.931955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.931961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x227b750) 00:16:41.127 [2024-11-19 10:10:54.931972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.127 [2024-11-19 10:10:54.932008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfd40, cid 4, qid 0 00:16:41.127 [2024-11-19 10:10:54.932495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:41.127 [2024-11-19 10:10:54.932516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:41.127 [2024-11-19 10:10:54.932522] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.932527] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227b750): datao=0, datal=4096, cccid=4 00:16:41.127 [2024-11-19 10:10:54.932534] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22dfd40) on tqpair(0x227b750): expected_datao=0, payload_size=4096 00:16:41.127 [2024-11-19 10:10:54.932540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.932550] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.932556] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.932567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.127 [2024-11-19 10:10:54.932575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.127 [2024-11-19 10:10:54.932580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.932586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfd40) on tqpair=0x227b750 00:16:41.127 [2024-11-19 10:10:54.932599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.932611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.932626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.932635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.932642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.932649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.932657] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:16:41.127 [2024-11-19 10:10:54.932663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:16:41.127 [2024-11-19 10:10:54.932670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:16:41.127 [2024-11-19 10:10:54.932694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.127 
[2024-11-19 10:10:54.932700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227b750) 00:16:41.127 [2024-11-19 10:10:54.932712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.127 [2024-11-19 10:10:54.932722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.932727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.932732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227b750) 00:16:41.127 [2024-11-19 10:10:54.932740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.127 [2024-11-19 10:10:54.932778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfd40, cid 4, qid 0 00:16:41.127 [2024-11-19 10:10:54.932788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfec0, cid 5, qid 0 00:16:41.127 [2024-11-19 10:10:54.933086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.127 [2024-11-19 10:10:54.933105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.127 [2024-11-19 10:10:54.933122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.127 [2024-11-19 10:10:54.933128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfd40) on tqpair=0x227b750 00:16:41.128 [2024-11-19 10:10:54.933137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.128 [2024-11-19 10:10:54.933145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.128 [2024-11-19 10:10:54.933150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.933155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfec0) on tqpair=0x227b750 00:16:41.128 [2024-11-19 10:10:54.933170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.933176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227b750) 00:16:41.128 [2024-11-19 10:10:54.933186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.128 [2024-11-19 10:10:54.933212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfec0, cid 5, qid 0 00:16:41.128 [2024-11-19 10:10:54.933454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.128 [2024-11-19 10:10:54.933474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.128 [2024-11-19 10:10:54.933480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.933486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfec0) on tqpair=0x227b750 00:16:41.128 [2024-11-19 10:10:54.933501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.933506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227b750) 00:16:41.128 [2024-11-19 10:10:54.933516] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.128 [2024-11-19 10:10:54.933542] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfec0, cid 5, qid 0 00:16:41.128 [2024-11-19 10:10:54.933781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.128 [2024-11-19 10:10:54.933798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.128 [2024-11-19 10:10:54.933804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.933809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfec0) on tqpair=0x227b750 00:16:41.128 [2024-11-19 10:10:54.933823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.933829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227b750) 00:16:41.128 [2024-11-19 10:10:54.933839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.128 [2024-11-19 10:10:54.933861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfec0, cid 5, qid 0 00:16:41.128 [2024-11-19 10:10:54.933928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.128 [2024-11-19 10:10:54.933938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.128 [2024-11-19 10:10:54.933943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.933948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfec0) on tqpair=0x227b750 00:16:41.128 [2024-11-19 10:10:54.933975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.933982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x227b750) 00:16:41.128 [2024-11-19 10:10:54.933992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.128 [2024-11-19 10:10:54.934002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x227b750) 00:16:41.128 [2024-11-19 10:10:54.934016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.128 [2024-11-19 10:10:54.934026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x227b750) 00:16:41.128 [2024-11-19 10:10:54.934039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.128 [2024-11-19 10:10:54.934050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x227b750) 00:16:41.128 [2024-11-19 10:10:54.934064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.128 [2024-11-19 10:10:54.934092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfec0, cid 5, qid 0 00:16:41.128 
[2024-11-19 10:10:54.934101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfd40, cid 4, qid 0 00:16:41.128 [2024-11-19 10:10:54.934108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e0040, cid 6, qid 0 00:16:41.128 [2024-11-19 10:10:54.934114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e01c0, cid 7, qid 0 00:16:41.128 [2024-11-19 10:10:54.934556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:41.128 [2024-11-19 10:10:54.934577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:41.128 [2024-11-19 10:10:54.934583] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934588] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227b750): datao=0, datal=8192, cccid=5 00:16:41.128 [2024-11-19 10:10:54.934594] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22dfec0) on tqpair(0x227b750): expected_datao=0, payload_size=8192 00:16:41.128 [2024-11-19 10:10:54.934600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934623] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934629] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:41.128 [2024-11-19 10:10:54.934645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:41.128 [2024-11-19 10:10:54.934650] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934655] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227b750): datao=0, datal=512, cccid=4 00:16:41.128 [2024-11-19 10:10:54.934661] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22dfd40) on tqpair(0x227b750): expected_datao=0, payload_size=512 00:16:41.128 [2024-11-19 10:10:54.934668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934676] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934681] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:41.128 [2024-11-19 10:10:54.934696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:41.128 [2024-11-19 10:10:54.934700] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934705] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227b750): datao=0, datal=512, cccid=6 00:16:41.128 [2024-11-19 10:10:54.934711] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e0040) on tqpair(0x227b750): expected_datao=0, payload_size=512 00:16:41.128 [2024-11-19 10:10:54.934717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934725] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934730] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:41.128 [2024-11-19 10:10:54.934745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:41.128 [2024-11-19 10:10:54.934749] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934754] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x227b750): datao=0, datal=4096, cccid=7 00:16:41.128 [2024-11-19 10:10:54.934760] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e01c0) on tqpair(0x227b750): expected_datao=0, payload_size=4096 00:16:41.128 [2024-11-19 10:10:54.934766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934775] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934779] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.128 [2024-11-19 10:10:54.934794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.128 [2024-11-19 10:10:54.934799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfec0) on tqpair=0x227b750 00:16:41.128 [2024-11-19 10:10:54.934826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.128 [2024-11-19 10:10:54.934835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.128 [2024-11-19 10:10:54.934839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfd40) on tqpair=0x227b750 00:16:41.128 [2024-11-19 10:10:54.934861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.128 [2024-11-19 10:10:54.934869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.128 [2024-11-19 10:10:54.934873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e0040) on tqpair=0x227b750 00:16:41.128 [2024-11-19 10:10:54.934888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.128 [2024-11-19 10:10:54.934896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.128 [2024-11-19 10:10:54.934901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.128 [2024-11-19 10:10:54.934906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e01c0) on tqpair=0x227b750 00:16:41.128 ===================================================== 00:16:41.128 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:41.128 ===================================================== 00:16:41.128 Controller Capabilities/Features 00:16:41.128 ================================ 00:16:41.128 Vendor ID: 8086 00:16:41.128 Subsystem Vendor ID: 8086 00:16:41.128 Serial Number: SPDK00000000000001 00:16:41.128 Model Number: SPDK bdev Controller 00:16:41.128 Firmware Version: 25.01 00:16:41.128 Recommended Arb Burst: 6 00:16:41.128 IEEE OUI Identifier: e4 d2 5c 00:16:41.128 Multi-path I/O 00:16:41.128 May have multiple subsystem ports: Yes 00:16:41.128 May have multiple controllers: Yes 00:16:41.128 Associated with SR-IOV VF: No 00:16:41.128 Max Data Transfer Size: 131072 00:16:41.128 Max Number of Namespaces: 32 00:16:41.128 Max Number of I/O Queues: 127 00:16:41.128 NVMe Specification Version (VS): 1.3 00:16:41.128 NVMe Specification Version (Identify): 1.3 
00:16:41.128 Maximum Queue Entries: 128 00:16:41.128 Contiguous Queues Required: Yes 00:16:41.129 Arbitration Mechanisms Supported 00:16:41.129 Weighted Round Robin: Not Supported 00:16:41.129 Vendor Specific: Not Supported 00:16:41.129 Reset Timeout: 15000 ms 00:16:41.129 Doorbell Stride: 4 bytes 00:16:41.129 NVM Subsystem Reset: Not Supported 00:16:41.129 Command Sets Supported 00:16:41.129 NVM Command Set: Supported 00:16:41.129 Boot Partition: Not Supported 00:16:41.129 Memory Page Size Minimum: 4096 bytes 00:16:41.129 Memory Page Size Maximum: 4096 bytes 00:16:41.129 Persistent Memory Region: Not Supported 00:16:41.129 Optional Asynchronous Events Supported 00:16:41.129 Namespace Attribute Notices: Supported 00:16:41.129 Firmware Activation Notices: Not Supported 00:16:41.129 ANA Change Notices: Not Supported 00:16:41.129 PLE Aggregate Log Change Notices: Not Supported 00:16:41.129 LBA Status Info Alert Notices: Not Supported 00:16:41.129 EGE Aggregate Log Change Notices: Not Supported 00:16:41.129 Normal NVM Subsystem Shutdown event: Not Supported 00:16:41.129 Zone Descriptor Change Notices: Not Supported 00:16:41.129 Discovery Log Change Notices: Not Supported 00:16:41.129 Controller Attributes 00:16:41.129 128-bit Host Identifier: Supported 00:16:41.129 Non-Operational Permissive Mode: Not Supported 00:16:41.129 NVM Sets: Not Supported 00:16:41.129 Read Recovery Levels: Not Supported 00:16:41.129 Endurance Groups: Not Supported 00:16:41.129 Predictable Latency Mode: Not Supported 00:16:41.129 Traffic Based Keep ALive: Not Supported 00:16:41.129 Namespace Granularity: Not Supported 00:16:41.129 SQ Associations: Not Supported 00:16:41.129 UUID List: Not Supported 00:16:41.129 Multi-Domain Subsystem: Not Supported 00:16:41.129 Fixed Capacity Management: Not Supported 00:16:41.129 Variable Capacity Management: Not Supported 00:16:41.129 Delete Endurance Group: Not Supported 00:16:41.129 Delete NVM Set: Not Supported 00:16:41.129 Extended LBA Formats Supported: Not Supported 00:16:41.129 Flexible Data Placement Supported: Not Supported 00:16:41.129 00:16:41.129 Controller Memory Buffer Support 00:16:41.129 ================================ 00:16:41.129 Supported: No 00:16:41.129 00:16:41.129 Persistent Memory Region Support 00:16:41.129 ================================ 00:16:41.129 Supported: No 00:16:41.129 00:16:41.129 Admin Command Set Attributes 00:16:41.129 ============================ 00:16:41.129 Security Send/Receive: Not Supported 00:16:41.129 Format NVM: Not Supported 00:16:41.129 Firmware Activate/Download: Not Supported 00:16:41.129 Namespace Management: Not Supported 00:16:41.129 Device Self-Test: Not Supported 00:16:41.129 Directives: Not Supported 00:16:41.129 NVMe-MI: Not Supported 00:16:41.129 Virtualization Management: Not Supported 00:16:41.129 Doorbell Buffer Config: Not Supported 00:16:41.129 Get LBA Status Capability: Not Supported 00:16:41.129 Command & Feature Lockdown Capability: Not Supported 00:16:41.129 Abort Command Limit: 4 00:16:41.129 Async Event Request Limit: 4 00:16:41.129 Number of Firmware Slots: N/A 00:16:41.129 Firmware Slot 1 Read-Only: N/A 00:16:41.129 Firmware Activation Without Reset: N/A 00:16:41.129 Multiple Update Detection Support: N/A 00:16:41.129 Firmware Update Granularity: No Information Provided 00:16:41.129 Per-Namespace SMART Log: No 00:16:41.129 Asymmetric Namespace Access Log Page: Not Supported 00:16:41.129 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:41.129 Command Effects Log Page: Supported 00:16:41.129 Get Log Page Extended 
Data: Supported 00:16:41.129 Telemetry Log Pages: Not Supported 00:16:41.129 Persistent Event Log Pages: Not Supported 00:16:41.129 Supported Log Pages Log Page: May Support 00:16:41.129 Commands Supported & Effects Log Page: Not Supported 00:16:41.129 Feature Identifiers & Effects Log Page:May Support 00:16:41.129 NVMe-MI Commands & Effects Log Page: May Support 00:16:41.129 Data Area 4 for Telemetry Log: Not Supported 00:16:41.129 Error Log Page Entries Supported: 128 00:16:41.129 Keep Alive: Supported 00:16:41.129 Keep Alive Granularity: 10000 ms 00:16:41.129 00:16:41.129 NVM Command Set Attributes 00:16:41.129 ========================== 00:16:41.129 Submission Queue Entry Size 00:16:41.129 Max: 64 00:16:41.129 Min: 64 00:16:41.129 Completion Queue Entry Size 00:16:41.129 Max: 16 00:16:41.129 Min: 16 00:16:41.129 Number of Namespaces: 32 00:16:41.129 Compare Command: Supported 00:16:41.129 Write Uncorrectable Command: Not Supported 00:16:41.129 Dataset Management Command: Supported 00:16:41.129 Write Zeroes Command: Supported 00:16:41.129 Set Features Save Field: Not Supported 00:16:41.129 Reservations: Supported 00:16:41.129 Timestamp: Not Supported 00:16:41.129 Copy: Supported 00:16:41.129 Volatile Write Cache: Present 00:16:41.129 Atomic Write Unit (Normal): 1 00:16:41.129 Atomic Write Unit (PFail): 1 00:16:41.129 Atomic Compare & Write Unit: 1 00:16:41.129 Fused Compare & Write: Supported 00:16:41.129 Scatter-Gather List 00:16:41.129 SGL Command Set: Supported 00:16:41.129 SGL Keyed: Supported 00:16:41.129 SGL Bit Bucket Descriptor: Not Supported 00:16:41.129 SGL Metadata Pointer: Not Supported 00:16:41.129 Oversized SGL: Not Supported 00:16:41.129 SGL Metadata Address: Not Supported 00:16:41.129 SGL Offset: Supported 00:16:41.129 Transport SGL Data Block: Not Supported 00:16:41.129 Replay Protected Memory Block: Not Supported 00:16:41.129 00:16:41.129 Firmware Slot Information 00:16:41.129 ========================= 00:16:41.129 Active slot: 1 00:16:41.129 Slot 1 Firmware Revision: 25.01 00:16:41.129 00:16:41.129 00:16:41.129 Commands Supported and Effects 00:16:41.129 ============================== 00:16:41.129 Admin Commands 00:16:41.129 -------------- 00:16:41.129 Get Log Page (02h): Supported 00:16:41.129 Identify (06h): Supported 00:16:41.129 Abort (08h): Supported 00:16:41.129 Set Features (09h): Supported 00:16:41.129 Get Features (0Ah): Supported 00:16:41.129 Asynchronous Event Request (0Ch): Supported 00:16:41.129 Keep Alive (18h): Supported 00:16:41.129 I/O Commands 00:16:41.129 ------------ 00:16:41.129 Flush (00h): Supported LBA-Change 00:16:41.129 Write (01h): Supported LBA-Change 00:16:41.129 Read (02h): Supported 00:16:41.129 Compare (05h): Supported 00:16:41.129 Write Zeroes (08h): Supported LBA-Change 00:16:41.129 Dataset Management (09h): Supported LBA-Change 00:16:41.129 Copy (19h): Supported LBA-Change 00:16:41.129 00:16:41.129 Error Log 00:16:41.129 ========= 00:16:41.129 00:16:41.129 Arbitration 00:16:41.129 =========== 00:16:41.129 Arbitration Burst: 1 00:16:41.129 00:16:41.129 Power Management 00:16:41.129 ================ 00:16:41.129 Number of Power States: 1 00:16:41.129 Current Power State: Power State #0 00:16:41.129 Power State #0: 00:16:41.129 Max Power: 0.00 W 00:16:41.129 Non-Operational State: Operational 00:16:41.129 Entry Latency: Not Reported 00:16:41.129 Exit Latency: Not Reported 00:16:41.129 Relative Read Throughput: 0 00:16:41.129 Relative Read Latency: 0 00:16:41.129 Relative Write Throughput: 0 00:16:41.129 Relative Write Latency: 0 
00:16:41.129 Idle Power: Not Reported 00:16:41.129 Active Power: Not Reported 00:16:41.129 Non-Operational Permissive Mode: Not Supported 00:16:41.129 00:16:41.129 Health Information 00:16:41.129 ================== 00:16:41.129 Critical Warnings: 00:16:41.129 Available Spare Space: OK 00:16:41.129 Temperature: OK 00:16:41.130 Device Reliability: OK 00:16:41.130 Read Only: No 00:16:41.130 Volatile Memory Backup: OK 00:16:41.130 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:41.130 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:41.130 Available Spare: 0% 00:16:41.130 Available Spare Threshold: 0% 00:16:41.130 Life Percentage Used:[2024-11-19 10:10:54.939095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.939110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x227b750) 00:16:41.130 [2024-11-19 10:10:54.939123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.130 [2024-11-19 10:10:54.939162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e01c0, cid 7, qid 0 00:16:41.130 [2024-11-19 10:10:54.939227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.130 [2024-11-19 10:10:54.939242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.130 [2024-11-19 10:10:54.939251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.939260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e01c0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.939318] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:16:41.130 [2024-11-19 10:10:54.939334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df740) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.939344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.130 [2024-11-19 10:10:54.939351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22df8c0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.939358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.130 [2024-11-19 10:10:54.939365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfa40) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.939371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.130 [2024-11-19 10:10:54.939378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.939385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.130 [2024-11-19 10:10:54.939398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.939404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.939409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.130 [2024-11-19 10:10:54.939420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:41.130 [2024-11-19 10:10:54.939452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.130 [2024-11-19 10:10:54.939825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.130 [2024-11-19 10:10:54.939845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.130 [2024-11-19 10:10:54.939851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.939857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.939868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.939874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.939879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.130 [2024-11-19 10:10:54.939889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.130 [2024-11-19 10:10:54.939934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.130 [2024-11-19 10:10:54.940377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.130 [2024-11-19 10:10:54.940409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.130 [2024-11-19 10:10:54.940421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.940429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.940440] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:16:41.130 [2024-11-19 10:10:54.940450] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:16:41.130 [2024-11-19 10:10:54.940473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.940485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.940492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.130 [2024-11-19 10:10:54.940508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.130 [2024-11-19 10:10:54.940550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.130 [2024-11-19 10:10:54.940895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.130 [2024-11-19 10:10:54.940937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.130 [2024-11-19 10:10:54.940944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.940950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.940968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.940974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.940979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.130 [2024-11-19 10:10:54.940991] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.130 [2024-11-19 10:10:54.941023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.130 [2024-11-19 10:10:54.941165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.130 [2024-11-19 10:10:54.941178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.130 [2024-11-19 10:10:54.941183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.941204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.130 [2024-11-19 10:10:54.941227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.130 [2024-11-19 10:10:54.941269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.130 [2024-11-19 10:10:54.941696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.130 [2024-11-19 10:10:54.941715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.130 [2024-11-19 10:10:54.941721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.941742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.130 [2024-11-19 10:10:54.941763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.130 [2024-11-19 10:10:54.941787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.130 [2024-11-19 10:10:54.941841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.130 [2024-11-19 10:10:54.941858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.130 [2024-11-19 10:10:54.941863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.130 [2024-11-19 10:10:54.941882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.130 [2024-11-19 10:10:54.941892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.130 [2024-11-19 10:10:54.941902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.130 [2024-11-19 10:10:54.941944] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.130 [2024-11-19 10:10:54.942223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.130 [2024-11-19 10:10:54.942246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.130 [2024-11-19 10:10:54.942255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.942263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.131 [2024-11-19 10:10:54.942284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.942293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.942300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.131 [2024-11-19 10:10:54.942314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.131 [2024-11-19 10:10:54.942350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.131 [2024-11-19 10:10:54.942704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.131 [2024-11-19 10:10:54.942722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.131 [2024-11-19 10:10:54.942728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.942733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.131 [2024-11-19 10:10:54.942749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.942755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.942760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.131 [2024-11-19 10:10:54.942769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.131 [2024-11-19 10:10:54.942793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.131 [2024-11-19 10:10:54.946958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.131 [2024-11-19 10:10:54.946988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.131 [2024-11-19 10:10:54.946995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.947002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.131 [2024-11-19 10:10:54.947022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.947029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.947034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x227b750) 00:16:41.131 [2024-11-19 10:10:54.947057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.131 [2024-11-19 10:10:54.947094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22dfbc0, cid 3, qid 0 00:16:41.131 [2024-11-19 10:10:54.947150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:41.131 [2024-11-19 
10:10:54.947159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:41.131 [2024-11-19 10:10:54.947164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:41.131 [2024-11-19 10:10:54.947169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22dfbc0) on tqpair=0x227b750 00:16:41.131 [2024-11-19 10:10:54.947180] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:16:41.131 0% 00:16:41.131 Data Units Read: 0 00:16:41.131 Data Units Written: 0 00:16:41.131 Host Read Commands: 0 00:16:41.131 Host Write Commands: 0 00:16:41.131 Controller Busy Time: 0 minutes 00:16:41.131 Power Cycles: 0 00:16:41.131 Power On Hours: 0 hours 00:16:41.131 Unsafe Shutdowns: 0 00:16:41.131 Unrecoverable Media Errors: 0 00:16:41.131 Lifetime Error Log Entries: 0 00:16:41.131 Warning Temperature Time: 0 minutes 00:16:41.131 Critical Temperature Time: 0 minutes 00:16:41.131 00:16:41.131 Number of Queues 00:16:41.131 ================ 00:16:41.131 Number of I/O Submission Queues: 127 00:16:41.131 Number of I/O Completion Queues: 127 00:16:41.131 00:16:41.131 Active Namespaces 00:16:41.131 ================= 00:16:41.131 Namespace ID:1 00:16:41.131 Error Recovery Timeout: Unlimited 00:16:41.131 Command Set Identifier: NVM (00h) 00:16:41.131 Deallocate: Supported 00:16:41.131 Deallocated/Unwritten Error: Not Supported 00:16:41.131 Deallocated Read Value: Unknown 00:16:41.131 Deallocate in Write Zeroes: Not Supported 00:16:41.131 Deallocated Guard Field: 0xFFFF 00:16:41.131 Flush: Supported 00:16:41.131 Reservation: Supported 00:16:41.131 Namespace Sharing Capabilities: Multiple Controllers 00:16:41.131 Size (in LBAs): 131072 (0GiB) 00:16:41.131 Capacity (in LBAs): 131072 (0GiB) 00:16:41.131 Utilization (in LBAs): 131072 (0GiB) 00:16:41.131 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:41.131 EUI64: ABCDEF0123456789 00:16:41.131 UUID: 4063b534-7fb5-4c81-a877-708531069481 00:16:41.131 Thin Provisioning: Not Supported 00:16:41.131 Per-NS Atomic Units: Yes 00:16:41.131 Atomic Boundary Size (Normal): 0 00:16:41.131 Atomic Boundary Size (PFail): 0 00:16:41.131 Atomic Boundary Offset: 0 00:16:41.131 Maximum Single Source Range Length: 65535 00:16:41.131 Maximum Copy Length: 65535 00:16:41.131 Maximum Source Range Count: 1 00:16:41.131 NGUID/EUI64 Never Reused: No 00:16:41.131 Namespace Write Protected: No 00:16:41.131 Number of LBA Formats: 1 00:16:41.131 Current LBA Format: LBA Format #00 00:16:41.131 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:41.131 00:16:41.131 10:10:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:16:41.390 10:10:55 
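The controller and namespace dump above is the output of the SPDK identify example pointed at the target shown earlier (10.0.0.3:4420, nqn.2016-06.io.spdk:cnode1); identify.sh then removes the subsystem over JSON-RPC via rpc_cmd, which wraps scripts/rpc.py. A sketch of the equivalent manual call, assuming the default RPC socket /var/tmp/spdk.sock used elsewhere in this run:

# Equivalent manual RPC for the nvmf_delete_subsystem step traced above (sketch only).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1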
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:41.390 rmmod nvme_tcp 00:16:41.390 rmmod nvme_fabrics 00:16:41.390 rmmod nvme_keyring 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74157 ']' 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74157 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74157 ']' 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74157 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74157 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.390 killing process with pid 74157 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74157' 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74157 00:16:41.390 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74157 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.650 10:10:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:41.650 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.909 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.909 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:41.909 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.909 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:16:41.910 00:16:41.910 real 0m2.268s 00:16:41.910 user 0m4.739s 00:16:41.910 sys 0m0.727s 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.910 ************************************ 00:16:41.910 END TEST nvmf_identify 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:41.910 ************************************ 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.910 ************************************ 00:16:41.910 START TEST nvmf_perf 00:16:41.910 ************************************ 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:41.910 * Looking for test storage... 
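At this point nvmf_identify has completed (2.268 s real) and run_test hands off to the next host-side test, perf.sh, with the TCP transport selected. A minimal sketch of driving the same script by hand from a built SPDK tree, assuming the layout used by this job and root privileges:

# Manual invocation sketch (mirrors the run_test line above; not copied from the log).
cd /home/vagrant/spdk_repo/spdk
sudo ./test/nvmf/host/perf.sh --transport=tcp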
00:16:41.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:41.910 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.170 --rc genhtml_branch_coverage=1 00:16:42.170 --rc genhtml_function_coverage=1 00:16:42.170 --rc genhtml_legend=1 00:16:42.170 --rc geninfo_all_blocks=1 00:16:42.170 --rc geninfo_unexecuted_blocks=1 00:16:42.170 00:16:42.170 ' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.170 --rc genhtml_branch_coverage=1 00:16:42.170 --rc genhtml_function_coverage=1 00:16:42.170 --rc genhtml_legend=1 00:16:42.170 --rc geninfo_all_blocks=1 00:16:42.170 --rc geninfo_unexecuted_blocks=1 00:16:42.170 00:16:42.170 ' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.170 --rc genhtml_branch_coverage=1 00:16:42.170 --rc genhtml_function_coverage=1 00:16:42.170 --rc genhtml_legend=1 00:16:42.170 --rc geninfo_all_blocks=1 00:16:42.170 --rc geninfo_unexecuted_blocks=1 00:16:42.170 00:16:42.170 ' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.170 --rc genhtml_branch_coverage=1 00:16:42.170 --rc genhtml_function_coverage=1 00:16:42.170 --rc genhtml_legend=1 00:16:42.170 --rc geninfo_all_blocks=1 00:16:42.170 --rc geninfo_unexecuted_blocks=1 00:16:42.170 00:16:42.170 ' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:42.170 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:42.170 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:42.171 Cannot find device "nvmf_init_br" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:42.171 Cannot find device "nvmf_init_br2" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:42.171 Cannot find device "nvmf_tgt_br" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.171 Cannot find device "nvmf_tgt_br2" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:42.171 Cannot find device "nvmf_init_br" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:42.171 Cannot find device "nvmf_init_br2" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:42.171 Cannot find device "nvmf_tgt_br" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:42.171 Cannot find device "nvmf_tgt_br2" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:42.171 Cannot find device "nvmf_br" 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:42.171 10:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:42.171 Cannot find device "nvmf_init_if" 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:42.171 Cannot find device "nvmf_init_if2" 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:42.171 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:42.430 10:10:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:42.430 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:42.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:42.431 00:16:42.431 --- 10.0.0.3 ping statistics --- 00:16:42.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.431 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:42.431 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:42.431 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:16:42.431 00:16:42.431 --- 10.0.0.4 ping statistics --- 00:16:42.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.431 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:42.431 00:16:42.431 --- 10.0.0.1 ping statistics --- 00:16:42.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.431 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:42.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:42.431 00:16:42.431 --- 10.0.0.2 ping statistics --- 00:16:42.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.431 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74405 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74405 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74405 ']' 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
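For reference, the nvmf_veth_init sequence traced above builds a small veth-plus-bridge topology and then verifies it with the four pings: the initiator keeps 10.0.0.1/10.0.0.2 on the host side, the target gets 10.0.0.3/10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, and the host-side peers are enslaved to the nvmf_br bridge. A condensed sketch of the same steps, using only commands and addresses visible in the trace (the second interface pair, nvmf_init_if2/nvmf_tgt_if2, and the FORWARD rule on the bridge follow the same pattern and are omitted here):

    # namespace and veth pairs; the *_br ends stay on the host, the *_if / *_tgt_if ends carry the addresses
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator address on the host, target address inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring links up and bridge the host-side peers together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open TCP port 4420 for NVMe/TCP; the SPDK_NVMF comment lets the teardown find this rule later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # connectivity check in both directions
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1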
00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.431 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:42.690 [2024-11-19 10:10:56.354496] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:42.690 [2024-11-19 10:10:56.354853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.690 [2024-11-19 10:10:56.513304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.949 [2024-11-19 10:10:56.586698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.949 [2024-11-19 10:10:56.586759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.949 [2024-11-19 10:10:56.586774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.949 [2024-11-19 10:10:56.586786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.949 [2024-11-19 10:10:56.586806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.949 [2024-11-19 10:10:56.588130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.949 [2024-11-19 10:10:56.588244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.949 [2024-11-19 10:10:56.588300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.949 [2024-11-19 10:10:56.588306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.949 [2024-11-19 10:10:56.647024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:42.949 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.949 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:16:42.949 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:42.949 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:42.949 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:42.949 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.949 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:42.949 10:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:43.517 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:43.517 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:43.776 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:43.776 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.034 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:44.034 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:16:44.034 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:44.035 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:44.035 10:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:44.293 [2024-11-19 10:10:58.139627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.293 10:10:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.859 10:10:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:44.859 10:10:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:44.859 10:10:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:44.859 10:10:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:45.426 10:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:45.686 [2024-11-19 10:10:59.345508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:45.686 10:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:45.943 10:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:45.944 10:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:45.944 10:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:45.944 10:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:47.319 Initializing NVMe Controllers 00:16:47.319 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:47.319 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:47.319 Initialization complete. Launching workers. 00:16:47.319 ======================================================== 00:16:47.319 Latency(us) 00:16:47.319 Device Information : IOPS MiB/s Average min max 00:16:47.319 PCIE (0000:00:10.0) NSID 1 from core 0: 23935.35 93.50 1337.07 362.63 6057.41 00:16:47.319 ======================================================== 00:16:47.319 Total : 23935.35 93.50 1337.07 362.63 6057.41 00:16:47.319 00:16:47.319 10:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:48.296 Initializing NVMe Controllers 00:16:48.296 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:48.296 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:48.296 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:48.296 Initialization complete. Launching workers. 
00:16:48.296 ======================================================== 00:16:48.296 Latency(us) 00:16:48.296 Device Information : IOPS MiB/s Average min max 00:16:48.296 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3481.00 13.60 286.83 108.06 7194.91 00:16:48.296 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8169.72 5028.44 15024.36 00:16:48.296 ======================================================== 00:16:48.296 Total : 3604.00 14.08 555.86 108.06 15024.36 00:16:48.296 00:16:48.554 10:11:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:49.931 Initializing NVMe Controllers 00:16:49.931 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:49.931 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:49.931 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:49.931 Initialization complete. Launching workers. 00:16:49.931 ======================================================== 00:16:49.931 Latency(us) 00:16:49.931 Device Information : IOPS MiB/s Average min max 00:16:49.931 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8748.00 34.17 3659.25 589.85 8987.80 00:16:49.931 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3931.00 15.36 8185.87 6493.78 17306.64 00:16:49.931 ======================================================== 00:16:49.931 Total : 12679.00 49.53 5062.68 589.85 17306.64 00:16:49.931 00:16:49.931 10:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:49.931 10:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:52.463 Initializing NVMe Controllers 00:16:52.463 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:52.463 Controller IO queue size 128, less than required. 00:16:52.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:52.463 Controller IO queue size 128, less than required. 00:16:52.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:52.463 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:52.463 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:52.463 Initialization complete. Launching workers. 
00:16:52.463 ======================================================== 00:16:52.463 Latency(us) 00:16:52.463 Device Information : IOPS MiB/s Average min max 00:16:52.463 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1645.48 411.37 79134.82 42094.58 124009.43 00:16:52.463 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 640.63 160.16 210296.93 70800.16 336772.16 00:16:52.463 ======================================================== 00:16:52.463 Total : 2286.10 571.53 115889.98 42094.58 336772.16 00:16:52.463 00:16:52.463 10:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:52.722 Initializing NVMe Controllers 00:16:52.722 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:52.722 Controller IO queue size 128, less than required. 00:16:52.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:52.722 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:52.722 Controller IO queue size 128, less than required. 00:16:52.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:52.722 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:52.722 WARNING: Some requested NVMe devices were skipped 00:16:52.722 No valid NVMe controllers or AIO or URING devices found 00:16:52.722 10:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:16:55.254 Initializing NVMe Controllers 00:16:55.254 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:55.254 Controller IO queue size 128, less than required. 00:16:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.254 Controller IO queue size 128, less than required. 00:16:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.254 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:55.254 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:55.254 Initialization complete. Launching workers. 
00:16:55.254 00:16:55.254 ==================== 00:16:55.254 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:55.254 TCP transport: 00:16:55.254 polls: 8949 00:16:55.254 idle_polls: 5117 00:16:55.254 sock_completions: 3832 00:16:55.254 nvme_completions: 6503 00:16:55.254 submitted_requests: 9974 00:16:55.254 queued_requests: 1 00:16:55.254 00:16:55.254 ==================== 00:16:55.254 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:55.254 TCP transport: 00:16:55.254 polls: 11096 00:16:55.254 idle_polls: 7160 00:16:55.254 sock_completions: 3936 00:16:55.254 nvme_completions: 6537 00:16:55.254 submitted_requests: 9794 00:16:55.254 queued_requests: 1 00:16:55.254 ======================================================== 00:16:55.254 Latency(us) 00:16:55.255 Device Information : IOPS MiB/s Average min max 00:16:55.255 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1620.90 405.22 80564.13 42802.35 124066.08 00:16:55.255 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1629.37 407.34 79912.36 35777.00 125000.79 00:16:55.255 ======================================================== 00:16:55.255 Total : 3250.27 812.57 80237.40 35777.00 125000.79 00:16:55.255 00:16:55.255 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:55.255 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.821 rmmod nvme_tcp 00:16:55.821 rmmod nvme_fabrics 00:16:55.821 rmmod nvme_keyring 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74405 ']' 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74405 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74405 ']' 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74405 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74405 00:16:55.821 killing process with pid 74405 00:16:55.821 10:11:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74405' 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74405 00:16:55.821 10:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74405 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:56.440 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.723 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:56.723 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:56.723 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:56.723 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:16:56.724 00:16:56.724 real 0m14.791s 00:16:56.724 user 0m53.736s 00:16:56.724 sys 0m4.226s 00:16:56.724 10:11:10 
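The teardown above mirrors that setup: the firewall rules added by the test are removed by filtering the SPDK_NVMF-tagged entries out of the saved ruleset, the bridge and veth pairs are deleted, and the target namespace goes away. Condensed into a sketch (the loop is a shorthand for the individual commands in the trace, and the final ip netns delete is an assumption about what _remove_spdk_ns amounts to here, since its output is redirected away):

    # drop only the rules this test inserted; they all carry the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # detach bridge ports, take them down, then delete the bridge and the veth pairs
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of _remove_spdk_ns in this run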
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:56.724 ************************************ 00:16:56.724 END TEST nvmf_perf 00:16:56.724 ************************************ 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.724 ************************************ 00:16:56.724 START TEST nvmf_fio_host 00:16:56.724 ************************************ 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:56.724 * Looking for test storage... 00:16:56.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:56.724 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.985 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:56.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.986 --rc genhtml_branch_coverage=1 00:16:56.986 --rc genhtml_function_coverage=1 00:16:56.986 --rc genhtml_legend=1 00:16:56.986 --rc geninfo_all_blocks=1 00:16:56.986 --rc geninfo_unexecuted_blocks=1 00:16:56.986 00:16:56.986 ' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:56.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.986 --rc genhtml_branch_coverage=1 00:16:56.986 --rc genhtml_function_coverage=1 00:16:56.986 --rc genhtml_legend=1 00:16:56.986 --rc geninfo_all_blocks=1 00:16:56.986 --rc geninfo_unexecuted_blocks=1 00:16:56.986 00:16:56.986 ' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:56.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.986 --rc genhtml_branch_coverage=1 00:16:56.986 --rc genhtml_function_coverage=1 00:16:56.986 --rc genhtml_legend=1 00:16:56.986 --rc geninfo_all_blocks=1 00:16:56.986 --rc geninfo_unexecuted_blocks=1 00:16:56.986 00:16:56.986 ' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:56.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.986 --rc genhtml_branch_coverage=1 00:16:56.986 --rc genhtml_function_coverage=1 00:16:56.986 --rc genhtml_legend=1 00:16:56.986 --rc geninfo_all_blocks=1 00:16:56.986 --rc geninfo_unexecuted_blocks=1 00:16:56.986 00:16:56.986 ' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.986 10:11:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.986 10:11:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.986 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.986 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
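The "[: : integer expression expected" message above is a shell quirk rather than a failure: test/nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty flag, and test(1) cannot compare an empty string numerically, so the condition is simply treated as false and the trace continues on the following lines. A two-line illustration (the ${FLAG:-0} form is a hypothetical hardening, not what the script does):

    [ "$FLAG" -eq 1 ] && echo enabled        # with FLAG empty: "[: : integer expression expected", condition is false
    [ "${FLAG:-0}" -eq 1 ] && echo enabled   # defaulting to 0 keeps the comparison numeric and silent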
00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:56.987 Cannot find device "nvmf_init_br" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:56.987 Cannot find device "nvmf_init_br2" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:56.987 Cannot find device "nvmf_tgt_br" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:56.987 Cannot find device "nvmf_tgt_br2" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:56.987 Cannot find device "nvmf_init_br" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:56.987 Cannot find device "nvmf_init_br2" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:56.987 Cannot find device "nvmf_tgt_br" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:56.987 Cannot find device "nvmf_tgt_br2" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:56.987 Cannot find device "nvmf_br" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:56.987 Cannot find device "nvmf_init_if" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:56.987 Cannot find device "nvmf_init_if2" 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:16:56.987 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:57.247 10:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:57.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:57.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:57.247 00:16:57.247 --- 10.0.0.3 ping statistics --- 00:16:57.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.247 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:57.247 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:57.247 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:16:57.247 00:16:57.247 --- 10.0.0.4 ping statistics --- 00:16:57.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.247 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:57.247 00:16:57.247 --- 10.0.0.1 ping statistics --- 00:16:57.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.247 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:57.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:57.247 00:16:57.247 --- 10.0.0.2 ping statistics --- 00:16:57.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.247 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.247 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74864 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74864 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 74864 ']' 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.506 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.506 [2024-11-19 10:11:11.192564] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:57.506 [2024-11-19 10:11:11.192663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.506 [2024-11-19 10:11:11.346675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.765 [2024-11-19 10:11:11.416084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.765 [2024-11-19 10:11:11.416155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.765 [2024-11-19 10:11:11.416169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.765 [2024-11-19 10:11:11.416180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.765 [2024-11-19 10:11:11.416189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
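Note: condensed into plain shell, the fio-host target bring-up traced above and the RPC configuration that follows amount roughly to the sketch below. Paths, addresses and flags are taken verbatim from the trace; the waitforlisten polling, xtrace plumbing and error traps of the real helpers are omitted, so treat this as an illustration rather than the script itself.

    # Launch the target inside the test namespace (instance 0, tracepoint mask 0xFFFF, core mask 0xF),
    # then wait for it to listen on /var/tmp/spdk.sock before issuing RPCs.
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

    # TCP transport plus a 64 MiB / 512-byte-block malloc namespace exported on 10.0.0.3:4420.
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc1
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # The fio step then preloads the SPDK NVMe fio plugin and points it at the new listener.
    LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
        "$SPDK/app/fio/nvme/example_config.fio" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096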
00:16:57.765 [2024-11-19 10:11:11.417509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.765 [2024-11-19 10:11:11.417614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.765 [2024-11-19 10:11:11.417747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.765 [2024-11-19 10:11:11.417752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.765 [2024-11-19 10:11:11.477587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.765 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.765 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:16:57.765 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:58.024 [2024-11-19 10:11:11.827781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.024 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:58.024 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.024 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.024 10:11:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:58.283 Malloc1 00:16:58.283 10:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:58.541 10:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.108 10:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:59.108 [2024-11-19 10:11:12.949583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:59.108 10:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:59.366 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:59.625 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:59.625 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:59.625 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:59.625 10:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:59.625 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:59.625 fio-3.35 00:16:59.625 Starting 1 thread 00:17:02.151 00:17:02.151 test: (groupid=0, jobs=1): err= 0: pid=74940: Tue Nov 19 10:11:15 2024 00:17:02.151 read: IOPS=8399, BW=32.8MiB/s (34.4MB/s)(65.8MiB/2007msec) 00:17:02.151 slat (usec): min=2, max=318, avg= 2.41, stdev= 3.13 00:17:02.151 clat (usec): min=2540, max=13524, avg=7941.67, stdev=583.85 00:17:02.151 lat (usec): min=2587, max=13526, avg=7944.08, stdev=583.52 00:17:02.151 clat percentiles (usec): 00:17:02.151 | 1.00th=[ 6783], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7570], 00:17:02.151 | 30.00th=[ 7701], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:17:02.151 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:17:02.151 | 99.00th=[ 9634], 99.50th=[10683], 99.90th=[12911], 99.95th=[13173], 00:17:02.151 | 99.99th=[13435] 00:17:02.151 bw ( KiB/s): min=32864, max=34152, per=99.94%, avg=33576.00, stdev=567.76, samples=4 00:17:02.151 iops : min= 8216, max= 8538, avg=8394.00, stdev=141.94, samples=4 00:17:02.151 write: IOPS=8392, BW=32.8MiB/s (34.4MB/s)(65.8MiB/2007msec); 0 zone resets 00:17:02.151 slat (usec): min=2, max=254, avg= 2.48, stdev= 2.16 00:17:02.151 clat (usec): min=2403, max=13238, avg=7229.52, stdev=533.40 00:17:02.151 lat (usec): min=2417, max=13240, avg=7232.00, stdev=533.18 00:17:02.151 clat percentiles 
(usec): 00:17:02.151 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6849], 00:17:02.151 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7308], 00:17:02.151 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7767], 95.00th=[ 7963], 00:17:02.151 | 99.00th=[ 8717], 99.50th=[ 9503], 99.90th=[11469], 99.95th=[12518], 00:17:02.151 | 99.99th=[13173] 00:17:02.151 bw ( KiB/s): min=33448, max=33800, per=100.00%, avg=33570.00, stdev=162.30, samples=4 00:17:02.151 iops : min= 8362, max= 8450, avg=8392.50, stdev=40.58, samples=4 00:17:02.151 lat (msec) : 4=0.11%, 10=99.31%, 20=0.59% 00:17:02.151 cpu : usr=72.93%, sys=20.89%, ctx=10, majf=0, minf=7 00:17:02.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:02.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.151 issued rwts: total=16857,16843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.151 00:17:02.151 Run status group 0 (all jobs): 00:17:02.151 READ: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=65.8MiB (69.0MB), run=2007-2007msec 00:17:02.151 WRITE: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=65.8MiB (69.0MB), run=2007-2007msec 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:02.151 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:02.152 10:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:02.152 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:02.152 fio-3.35 00:17:02.152 Starting 1 thread 00:17:04.680 00:17:04.680 test: (groupid=0, jobs=1): err= 0: pid=74983: Tue Nov 19 10:11:18 2024 00:17:04.680 read: IOPS=8161, BW=128MiB/s (134MB/s)(256MiB/2008msec) 00:17:04.680 slat (usec): min=3, max=115, avg= 3.82, stdev= 1.69 00:17:04.680 clat (usec): min=2089, max=18867, avg=8730.10, stdev=2583.29 00:17:04.680 lat (usec): min=2092, max=18871, avg=8733.92, stdev=2583.37 00:17:04.680 clat percentiles (usec): 00:17:04.680 | 1.00th=[ 4178], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6456], 00:17:04.680 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9241], 00:17:04.680 | 70.00th=[10159], 80.00th=[10814], 90.00th=[12256], 95.00th=[13435], 00:17:04.680 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16450], 99.95th=[16909], 00:17:04.680 | 99.99th=[17695] 00:17:04.680 bw ( KiB/s): min=59968, max=78944, per=52.00%, avg=67912.00, stdev=8253.82, samples=4 00:17:04.680 iops : min= 3748, max= 4934, avg=4244.50, stdev=515.86, samples=4 00:17:04.680 write: IOPS=4869, BW=76.1MiB/s (79.8MB/s)(138MiB/1820msec); 0 zone resets 00:17:04.680 slat (usec): min=35, max=312, avg=38.87, stdev= 6.28 00:17:04.680 clat (usec): min=4367, max=20664, avg=12084.36, stdev=2302.33 00:17:04.680 lat (usec): min=4403, max=20700, avg=12123.23, stdev=2303.40 00:17:04.680 clat percentiles (usec): 00:17:04.680 | 1.00th=[ 7570], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10159], 00:17:04.680 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:17:04.680 | 70.00th=[13042], 80.00th=[14091], 90.00th=[15401], 95.00th=[16319], 00:17:04.680 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19792], 99.95th=[20317], 00:17:04.680 | 99.99th=[20579] 00:17:04.680 bw ( KiB/s): min=62368, max=81216, per=90.40%, avg=70432.00, stdev=8385.34, samples=4 00:17:04.680 iops : min= 3898, max= 5076, avg=4402.00, stdev=524.08, samples=4 00:17:04.680 lat (msec) : 4=0.42%, 10=50.55%, 20=49.00%, 50=0.02% 00:17:04.680 cpu : usr=83.01%, sys=12.76%, ctx=9, majf=0, minf=14 00:17:04.680 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:04.680 issued rwts: total=16389,8862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:04.680 00:17:04.680 Run status group 0 (all jobs): 00:17:04.680 READ: bw=128MiB/s (134MB/s), 
128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (269MB), run=2008-2008msec 00:17:04.680 WRITE: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=138MiB (145MB), run=1820-1820msec 00:17:04.680 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.937 rmmod nvme_tcp 00:17:04.937 rmmod nvme_fabrics 00:17:04.937 rmmod nvme_keyring 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74864 ']' 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74864 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74864 ']' 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74864 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74864 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.937 killing process with pid 74864 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74864' 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74864 00:17:04.937 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74864 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:05.194 10:11:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:05.194 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.194 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:05.194 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:05.194 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:05.194 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:05.194 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:17:05.452 00:17:05.452 real 0m8.705s 00:17:05.452 user 0m34.486s 00:17:05.452 sys 0m2.398s 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.452 ************************************ 00:17:05.452 END TEST nvmf_fio_host 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.452 ************************************ 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.452 ************************************ 00:17:05.452 START TEST nvmf_failover 00:17:05.452 
************************************ 00:17:05.452 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:05.711 * Looking for test storage... 00:17:05.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:05.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.711 --rc genhtml_branch_coverage=1 00:17:05.711 --rc genhtml_function_coverage=1 00:17:05.711 --rc genhtml_legend=1 00:17:05.711 --rc geninfo_all_blocks=1 00:17:05.711 --rc geninfo_unexecuted_blocks=1 00:17:05.711 00:17:05.711 ' 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:05.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.711 --rc genhtml_branch_coverage=1 00:17:05.711 --rc genhtml_function_coverage=1 00:17:05.711 --rc genhtml_legend=1 00:17:05.711 --rc geninfo_all_blocks=1 00:17:05.711 --rc geninfo_unexecuted_blocks=1 00:17:05.711 00:17:05.711 ' 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:05.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.711 --rc genhtml_branch_coverage=1 00:17:05.711 --rc genhtml_function_coverage=1 00:17:05.711 --rc genhtml_legend=1 00:17:05.711 --rc geninfo_all_blocks=1 00:17:05.711 --rc geninfo_unexecuted_blocks=1 00:17:05.711 00:17:05.711 ' 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:05.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.711 --rc genhtml_branch_coverage=1 00:17:05.711 --rc genhtml_function_coverage=1 00:17:05.711 --rc genhtml_legend=1 00:17:05.711 --rc geninfo_all_blocks=1 00:17:05.711 --rc geninfo_unexecuted_blocks=1 00:17:05.711 00:17:05.711 ' 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.711 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.712 
10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.712 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
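Note: the nvmftestinit/nvmf_veth_init trace that follows rebuilds the same virtual topology the fio-host run used, with the ports fixed above (4420/4421/4422) and the 10.0.0.x addresses it assigns below. Stripped of the xtrace noise and of the link-up and ping verification steps, the construction amounts roughly to this sketch:

    # Two veth pairs for the initiator side, two for the target side; the target ends move
    # into the nvmf_tgt_ns_spdk namespace and all bridge ends are enslaved to nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open the NVMe/TCP port on the initiator-facing interfaces and allow bridge-local forwarding.
    # The real helper tags each rule with an SPDK_NVMF comment so teardown can strip them again
    # via iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT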
00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:05.712 Cannot find device "nvmf_init_br" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:05.712 Cannot find device "nvmf_init_br2" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:17:05.712 Cannot find device "nvmf_tgt_br" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.712 Cannot find device "nvmf_tgt_br2" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:05.712 Cannot find device "nvmf_init_br" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:05.712 Cannot find device "nvmf_init_br2" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:05.712 Cannot find device "nvmf_tgt_br" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:05.712 Cannot find device "nvmf_tgt_br2" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:05.712 Cannot find device "nvmf_br" 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:17:05.712 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:05.969 Cannot find device "nvmf_init_if" 00:17:05.969 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:17:05.969 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:05.969 Cannot find device "nvmf_init_if2" 00:17:05.969 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.970 
10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:05.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:05.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:17:05.970 00:17:05.970 --- 10.0.0.3 ping statistics --- 00:17:05.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.970 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:05.970 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:05.970 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:05.970 00:17:05.970 --- 10.0.0.4 ping statistics --- 00:17:05.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.970 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:05.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:05.970 00:17:05.970 --- 10.0.0.1 ping statistics --- 00:17:05.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.970 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:05.970 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:06.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:06.228 00:17:06.228 --- 10.0.0.2 ping statistics --- 00:17:06.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.228 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75253 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75253 00:17:06.228 10:11:19 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75253 ']' 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.228 10:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:06.229 [2024-11-19 10:11:19.965559] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:17:06.229 [2024-11-19 10:11:19.965671] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.486 [2024-11-19 10:11:20.118639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:06.486 [2024-11-19 10:11:20.187478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.486 [2024-11-19 10:11:20.187552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.486 [2024-11-19 10:11:20.187567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.486 [2024-11-19 10:11:20.187579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.486 [2024-11-19 10:11:20.187590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:06.486 [2024-11-19 10:11:20.189066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.486 [2024-11-19 10:11:20.189198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.486 [2024-11-19 10:11:20.189204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.486 [2024-11-19 10:11:20.243393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:06.486 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.486 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:06.486 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.486 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.486 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:06.486 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.486 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:06.744 [2024-11-19 10:11:20.620108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.000 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:07.259 Malloc0 00:17:07.259 10:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.517 10:11:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.774 10:11:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:08.070 [2024-11-19 10:11:21.804369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.070 10:11:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:08.350 [2024-11-19 10:11:22.100534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:08.350 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:08.609 [2024-11-19 10:11:22.356737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75304 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
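Note: at this point the failover target exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.3 ports 4420, 4421 and 4422, and bdevperf has been launched in wait-for-RPC mode (-z -r /var/tmp/bdevperf.sock) with a queue depth of 128 and a 4 KiB verify workload for 15 seconds. Condensed, the failover exercise that the rest of the trace performs looks roughly like the sketch below; the commands are taken from the trace, while the backgrounding and timing helpers are simplified.

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf (initiator) side
    TGT_RPC="$SPDK/scripts/rpc.py"                               # nvmf_tgt (target) side

    # Register two paths to the same subsystem on the bdevperf side, failover policy enabled.
    $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover
    $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover

    # Start the verify workload, then pull listeners out from under it on the target side.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $TGT_RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3
    $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover
    $TGT_RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 3
    $TGT_RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    $TGT_RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    wait   # bdevperf's JSON summary further down reports ~8762 IOPS with io_failed=3493 for the run

Each listener removal forces the NVMe0 bdev to fail over to whichever path is still reachable while I/O stays in flight, which is consistent with the ABORTED - SQ DELETION completions visible in the captured bdevperf log near the end of the test.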
00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75304 /var/tmp/bdevperf.sock 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75304 ']' 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.609 10:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 10:11:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.988 10:11:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:09.988 10:11:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:09.988 NVMe0n1 00:17:09.988 10:11:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:10.555 00:17:10.555 10:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75328 00:17:10.555 10:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:10.555 10:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:11.491 10:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:11.749 10:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:15.035 10:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:15.035 00:17:15.035 10:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:15.293 10:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:18.579 10:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:18.579 [2024-11-19 10:11:32.448520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:18.837 10:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:19.773 10:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:20.067 10:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75328 00:17:26.657 { 00:17:26.657 "results": [ 00:17:26.657 { 00:17:26.657 "job": "NVMe0n1", 00:17:26.657 "core_mask": "0x1", 00:17:26.657 "workload": "verify", 00:17:26.657 "status": "finished", 00:17:26.657 "verify_range": { 00:17:26.657 "start": 0, 00:17:26.657 "length": 16384 00:17:26.657 }, 00:17:26.657 "queue_depth": 128, 00:17:26.657 "io_size": 4096, 00:17:26.657 "runtime": 15.009759, 00:17:26.657 "iops": 8761.966131501511, 00:17:26.657 "mibps": 34.22643020117778, 00:17:26.657 "io_failed": 3493, 00:17:26.657 "io_timeout": 0, 00:17:26.657 "avg_latency_us": 14198.151386153548, 00:17:26.657 "min_latency_us": 662.8072727272727, 00:17:26.657 "max_latency_us": 28954.996363636365 00:17:26.657 } 00:17:26.657 ], 00:17:26.657 "core_count": 1 00:17:26.657 } 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75304 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75304 ']' 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75304 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75304 00:17:26.657 killing process with pid 75304 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75304' 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75304 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75304 00:17:26.657 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:26.657 [2024-11-19 10:11:22.447581] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:17:26.657 [2024-11-19 10:11:22.447703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75304 ] 00:17:26.657 [2024-11-19 10:11:22.608075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.657 [2024-11-19 10:11:22.676200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.657 [2024-11-19 10:11:22.733377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:26.657 Running I/O for 15 seconds... 
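[editorial sketch, not part of the captured output] The host-side choreography recorded in the trace above, condensed: two paths to cnode1 are attached to the same NVMe0 bdev with -x failover, perform_tests is started over bdevperf's RPC socket, and listeners are then pulled and re-added so in-flight I/O has to fail over across 10.0.0.3:4420/4421/4422. The $rpc/$bpf shorthands and the bare wait are condensations; the real script tracks the PIDs (75304/75328) shown in the log, and every address, port, flag and path below is copied from the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bpf="$rpc -s /var/tmp/bdevperf.sock"      # bdevperf's private RPC socket
# two paths to the same subsystem, both backing the NVMe0 bdev, failover policy
$bpf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -x failover
$bpf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -x failover
# start the 15-second verify run in the background, then disturb the paths
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
     -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 3
$bpf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -x failover     # third path added mid-run
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
wait      # perform_tests finishes and prints the JSON block shown above

The JSON result is self-consistent: 8761.97 IOPS at the 4096-byte io_size works out to 8761.97 x 4096 / 2^20, roughly 34.23 MiB/s, matching the reported mibps. The io_failed count of 3493 plausibly corresponds to commands caught in flight each time a listener was pulled; the ABORTED - SQ DELETION completions that fill try.txt below are those aborts being logged as the host tears down the old queue pair and resets onto the next path.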
00:17:26.657 6934.00 IOPS, 27.09 MiB/s [2024-11-19T10:11:40.546Z] [2024-11-19 10:11:25.491954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:26.657 [2024-11-19 10:11:25.492347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.657 [2024-11-19 10:11:25.492509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.657 [2024-11-19 10:11:25.492523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492693] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.492978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.492994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493025] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65568 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.658 [2024-11-19 10:11:25.493509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.658 [2024-11-19 10:11:25.493525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.493539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.493569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.493605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.493636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:26.659 [2024-11-19 10:11:25.493666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.493697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.493727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.493757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.493787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.493818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.493849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.493880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.493910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.493954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.493970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.493984] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.494302] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.494332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.659 [2024-11-19 10:11:25.494581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.659 [2024-11-19 10:11:25.494698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.659 [2024-11-19 10:11:25.494712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.494728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.494743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.494759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.494774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.494799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.494814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.494830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.494844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.494861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.494875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.494891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.494905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.494933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.494948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:26.660 [2024-11-19 10:11:25.494964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.494979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.494995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 
10:11:25.495284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.660 [2024-11-19 10:11:25.495817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.660 [2024-11-19 10:11:25.495831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.495847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:25.495862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.495878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:25.495892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.495907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:25.495934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.495952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:25.495966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.495982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:25.496004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:25.496043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:25.496084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:25.496114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f30 is same with the state(6) to be set 00:17:26.661 [2024-11-19 10:11:25.496148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:26.661 [2024-11-19 10:11:25.496159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:26.661 [2024-11-19 10:11:25.496171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65240 len:8 PRP1 0x0 PRP2 0x0 00:17:26.661 [2024-11-19 10:11:25.496190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496254] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:26.661 [2024-11-19 10:11:25.496311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.661 [2024-11-19 10:11:25.496333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.661 [2024-11-19 10:11:25.496364] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.661 [2024-11-19 10:11:25.496393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.661 [2024-11-19 10:11:25.496421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:25.496435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:26.661 [2024-11-19 10:11:25.496507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233f710 (9): Bad file descriptor 00:17:26.661 [2024-11-19 10:11:25.500311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:26.661 [2024-11-19 10:11:25.530755] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:17:26.661 7548.00 IOPS, 29.48 MiB/s [2024-11-19T10:11:40.550Z] 8004.33 IOPS, 31.27 MiB/s [2024-11-19T10:11:40.550Z] 8229.25 IOPS, 32.15 MiB/s [2024-11-19T10:11:40.550Z] [2024-11-19 10:11:29.156696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.661 [2024-11-19 10:11:29.157262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.157400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.661 [2024-11-19 10:11:29.157490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.157580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.661 [2024-11-19 10:11:29.157648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.157713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.661 [2024-11-19 10:11:29.157785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.157851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.157927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.158017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.158105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.158173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.158250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.158323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.158397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.158471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.158545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.158610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.158694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.158759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.158840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.158932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.159020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.159088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.159161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.159234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.159307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.661 [2024-11-19 10:11:29.159379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.661 [2024-11-19 10:11:29.159463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.159536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.159613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.159686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.159764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.159837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.159927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.160004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.160107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.160185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.160262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.160336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.160401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.160463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.160542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.160607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.160679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.160765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.160836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.160899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.160975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.161053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.161127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.161190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.161274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.161345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.161419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.161481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.161552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.161615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.161690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.161760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.161822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.161884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.662 [2024-11-19 10:11:29.161961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.162028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.162105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.162179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.162241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.162302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.162362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.162431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.162504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 
10:11:29.162566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.162643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.162706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.162781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.162853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.162938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.163031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.163095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.163166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.163242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.163305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.163381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.163451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.163522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.163592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.163671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.163733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.163794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.163854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.163956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.164027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.662 [2024-11-19 10:11:29.164132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.662 [2024-11-19 10:11:29.164200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.164282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.164355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.164429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.164501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.164573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.164636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.164724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.164802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.164892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.164983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.165064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.165135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.165206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.165268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.165338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.165409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.165483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.165553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.165624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.165694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.165770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.165833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.165921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.165995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.166070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.166133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.166208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.166271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.166346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.166409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.166493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.166556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.166626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.166716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.166788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.166859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.166948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.167030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71784 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.167102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.167164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.167225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.167294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.167356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.167418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.167491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.167554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.167626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.167688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.167758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.167828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.167901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.168000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.168109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.168178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.168242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.168304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.168366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.168427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 
[2024-11-19 10:11:29.170136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.170228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.170309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.170373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.170435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.170496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.663 [2024-11-19 10:11:29.170557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.170618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.170693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.170756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.170841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.170905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.171007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.171088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.171162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.171227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.663 [2024-11-19 10:11:29.171299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.663 [2024-11-19 10:11:29.171364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.171435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.171499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.171571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.171644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.171707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.171770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.171849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.171922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.172019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.172181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.172247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.172310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.172382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.172446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.172509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.172571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.172633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.172696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.172773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.172847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.172910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.173003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.173069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.173145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.173208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.173269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.173329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.173403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.173466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.173535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.173597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.173676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.173768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.173831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.173931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.174039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.174107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.174170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.174232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.174294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.174379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.174454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.174516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.174586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.174656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.174726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.174786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.174857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.174945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.175040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.175107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.664 [2024-11-19 10:11:29.175169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.175231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.175292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.175353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.175429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.175502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.175589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.175654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.175717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.175780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.175856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 
[2024-11-19 10:11:29.175944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.176032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.176130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.176196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.176258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.176320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.176382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.176466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.176531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.176594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.176665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.176754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.176816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.176893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.664 [2024-11-19 10:11:29.176991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.664 [2024-11-19 10:11:29.177075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:29.177141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:29.177221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:29.177302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:29.177375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:29.177460] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de370 is same with the state(6) to be set 00:17:26.665 [2024-11-19 10:11:29.177549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:26.665 [2024-11-19 10:11:29.177622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:26.665 [2024-11-19 10:11:29.177695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71568 len:8 PRP1 0x0 PRP2 0x0 00:17:26.665 [2024-11-19 10:11:29.177754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:29.177888] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:17:26.665 [2024-11-19 10:11:29.178056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.665 [2024-11-19 10:11:29.178142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:29.178222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.665 [2024-11-19 10:11:29.178332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:29.178402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.665 [2024-11-19 10:11:29.178463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:29.178531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.665 [2024-11-19 10:11:29.178598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:29.178657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:26.665 [2024-11-19 10:11:29.178774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233f710 (9): Bad file descriptor 00:17:26.665 [2024-11-19 10:11:29.182854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:26.665 [2024-11-19 10:11:29.206108] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
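The per-interval samples that follow report both IOPS and MiB/s; the two figures are mutually consistent with a fixed 4 KiB I/O size, which matches the len:8 (512-byte-block) READ/WRITE commands printed above. A minimal sketch of that conversion, with the 4 KiB size treated as an assumption inferred from the log rather than stated by it:

    # Sketch (not part of the test): relate an IOPS sample to the MiB/s figure
    # printed next to it, assuming each I/O is 4 KiB (len:8 x 512-byte blocks,
    # as in the READ/WRITE commands above).
    IO_SIZE_BYTES = 8 * 512  # assumed 4096-byte I/O size

    def mib_per_sec(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
        """Convert an IOPS sample to MiB/s for a fixed I/O size."""
        return iops * io_size / (1024 * 1024)

    # First sample below: 8285.80 IOPS -> ~32.37 MiB/s, as reported.
    print(f"{mib_per_sec(8285.80):.2f} MiB/s")

The same arithmetic reproduces the later samples as well (e.g. 8402.00 IOPS -> ~32.82 MiB/s), which is why the 4 KiB assumption appears sound.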
00:17:26.665 8285.80 IOPS, 32.37 MiB/s [2024-11-19T10:11:40.554Z] 8402.00 IOPS, 32.82 MiB/s [2024-11-19T10:11:40.554Z] 8481.71 IOPS, 33.13 MiB/s [2024-11-19T10:11:40.554Z] 8537.38 IOPS, 33.35 MiB/s [2024-11-19T10:11:40.554Z] 8570.11 IOPS, 33.48 MiB/s [2024-11-19T10:11:40.554Z] [2024-11-19 10:11:33.748113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.748243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.748275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.748305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.748366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.748397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.748426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.748456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.748485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9368 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 
[2024-11-19 10:11:33.748826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.748983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.665 [2024-11-19 10:11:33.748998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.749014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.749028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.749044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.749059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.665 [2024-11-19 10:11:33.749075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.665 [2024-11-19 10:11:33.749090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.666 [2024-11-19 10:11:33.749454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:26.666 [2024-11-19 10:11:33.749469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:26.666 [2024-11-19 10:11:33.749485 - 10:11:33.752801] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: remaining READ (sqid:1, lba 9496-9744) and WRITE (sqid:1, lba 9936-10376) commands reported as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:26.668 [2024-11-19 10:11:33.751527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23da9e0 is same with the state(6) to be set 
00:17:26.668 [2024-11-19 10:11:33.751544 - 10:11:33.752801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:17:26.668 [2024-11-19 10:11:33.751555 - 10:11:33.752801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: queued READ/WRITE commands (lba 9744, 10200-10376) completed manually with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:26.669 [2024-11-19 10:11:33.752866] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 
00:17:26.669 [2024-11-19 10:11:33.752937 - 10:11:33.753049] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000 reported as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:26.669 [2024-11-19 10:11:33.753063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:17:26.669 [2024-11-19 10:11:33.753114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233f710 (9): Bad file descriptor 
00:17:26.669 [2024-11-19 10:11:33.756926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 
00:17:26.669 [2024-11-19 10:11:33.788213] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:17:26.669 8569.10 IOPS, 33.47 MiB/s [2024-11-19T10:11:40.558Z] 8629.36 IOPS, 33.71 MiB/s [2024-11-19T10:11:40.558Z] 8630.25 IOPS, 33.71 MiB/s [2024-11-19T10:11:40.558Z] 8680.85 IOPS, 33.91 MiB/s [2024-11-19T10:11:40.558Z] 8724.50 IOPS, 34.08 MiB/s [2024-11-19T10:11:40.558Z] 8761.80 IOPS, 34.23 MiB/s 00:17:26.669 Latency(us) 00:17:26.669 [2024-11-19T10:11:40.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.669 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:26.669 Verification LBA range: start 0x0 length 0x4000 00:17:26.669 NVMe0n1 : 15.01 8761.97 34.23 232.72 0.00 14198.15 662.81 28955.00 00:17:26.669 [2024-11-19T10:11:40.558Z] =================================================================================================================== 00:17:26.669 [2024-11-19T10:11:40.558Z] Total : 8761.97 34.23 232.72 0.00 14198.15 662.81 28955.00 00:17:26.669 Received shutdown signal, test time was about 15.000000 seconds 00:17:26.669 00:17:26.669 Latency(us) 00:17:26.669 [2024-11-19T10:11:40.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.669 [2024-11-19T10:11:40.558Z] =================================================================================================================== 00:17:26.669 [2024-11-19T10:11:40.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:26.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75508 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75508 /var/tmp/bdevperf.sock 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75508 ']' 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:26.669 10:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:26.669 [2024-11-19 10:11:40.277327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:26.669 10:11:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:26.929 [2024-11-19 10:11:40.589597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:26.929 10:11:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:27.188 NVMe0n1 00:17:27.188 10:11:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:27.756 00:17:27.756 10:11:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:28.014 00:17:28.014 10:11:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:28.014 10:11:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:28.273 10:11:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:28.531 10:11:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:31.811 10:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:31.811 10:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:31.811 10:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75584 00:17:31.811 10:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.811 10:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75584 00:17:33.188 { 00:17:33.188 "results": [ 00:17:33.188 { 00:17:33.188 "job": "NVMe0n1", 00:17:33.188 "core_mask": "0x1", 00:17:33.188 "workload": "verify", 00:17:33.188 "status": "finished", 00:17:33.188 "verify_range": { 00:17:33.188 "start": 0, 00:17:33.188 "length": 16384 00:17:33.188 }, 00:17:33.188 "queue_depth": 128, 
00:17:33.188 "io_size": 4096, 00:17:33.188 "runtime": 1.010456, 00:17:33.188 "iops": 6861.258679249764, 00:17:33.188 "mibps": 26.80179171581939, 00:17:33.188 "io_failed": 0, 00:17:33.188 "io_timeout": 0, 00:17:33.188 "avg_latency_us": 18576.7636814707, 00:17:33.188 "min_latency_us": 2278.8654545454547, 00:17:33.188 "max_latency_us": 15847.796363636364 00:17:33.188 } 00:17:33.188 ], 00:17:33.188 "core_count": 1 00:17:33.188 } 00:17:33.188 10:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:33.188 [2024-11-19 10:11:39.659848] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:17:33.188 [2024-11-19 10:11:39.659986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75508 ] 00:17:33.188 [2024-11-19 10:11:39.810469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.188 [2024-11-19 10:11:39.878632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.188 [2024-11-19 10:11:39.934688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:33.188 [2024-11-19 10:11:42.291094] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:33.188 [2024-11-19 10:11:42.291232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.188 [2024-11-19 10:11:42.291260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.188 [2024-11-19 10:11:42.291279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.188 [2024-11-19 10:11:42.291294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.188 [2024-11-19 10:11:42.291308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.188 [2024-11-19 10:11:42.291322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.188 [2024-11-19 10:11:42.291336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.188 [2024-11-19 10:11:42.291350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.188 [2024-11-19 10:11:42.291364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:17:33.188 [2024-11-19 10:11:42.291418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:17:33.188 [2024-11-19 10:11:42.291451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dc710 (9): Bad file descriptor 00:17:33.188 [2024-11-19 10:11:42.302008] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:17:33.188 Running I/O for 1 seconds... 
00:17:33.188 6805.00 IOPS, 26.58 MiB/s 00:17:33.188 Latency(us) 00:17:33.188 [2024-11-19T10:11:47.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.188 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:33.188 Verification LBA range: start 0x0 length 0x4000 00:17:33.188 NVMe0n1 : 1.01 6861.26 26.80 0.00 0.00 18576.76 2278.87 15847.80 00:17:33.188 [2024-11-19T10:11:47.077Z] =================================================================================================================== 00:17:33.188 [2024-11-19T10:11:47.078Z] Total : 6861.26 26.80 0.00 0.00 18576.76 2278.87 15847.80 00:17:33.189 10:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:33.189 10:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:33.447 10:11:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:33.705 10:11:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:33.705 10:11:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:33.963 10:11:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:34.222 10:11:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:37.504 10:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:37.504 10:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75508 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75508 ']' 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75508 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75508 00:17:37.504 killing process with pid 75508 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75508' 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75508 00:17:37.504 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75508 00:17:37.763 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:37.763 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:38.026 rmmod nvme_tcp 00:17:38.026 rmmod nvme_fabrics 00:17:38.026 rmmod nvme_keyring 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75253 ']' 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75253 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75253 ']' 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75253 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75253 00:17:38.026 killing process with pid 75253 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75253' 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75253 00:17:38.026 10:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75253 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:38.292 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:17:38.551 00:17:38.551 real 0m33.105s 00:17:38.551 user 2m7.796s 00:17:38.551 sys 0m5.993s 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:38.551 ************************************ 00:17:38.551 END TEST nvmf_failover 00:17:38.551 ************************************ 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.551 ************************************ 00:17:38.551 START TEST nvmf_host_discovery 00:17:38.551 ************************************ 00:17:38.551 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:38.811 * Looking for test storage... 
00:17:38.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.811 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:38.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.812 --rc genhtml_branch_coverage=1 00:17:38.812 --rc genhtml_function_coverage=1 00:17:38.812 --rc genhtml_legend=1 00:17:38.812 --rc geninfo_all_blocks=1 00:17:38.812 --rc geninfo_unexecuted_blocks=1 00:17:38.812 00:17:38.812 ' 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:38.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.812 --rc genhtml_branch_coverage=1 00:17:38.812 --rc genhtml_function_coverage=1 00:17:38.812 --rc genhtml_legend=1 00:17:38.812 --rc geninfo_all_blocks=1 00:17:38.812 --rc geninfo_unexecuted_blocks=1 00:17:38.812 00:17:38.812 ' 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:38.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.812 --rc genhtml_branch_coverage=1 00:17:38.812 --rc genhtml_function_coverage=1 00:17:38.812 --rc genhtml_legend=1 00:17:38.812 --rc geninfo_all_blocks=1 00:17:38.812 --rc geninfo_unexecuted_blocks=1 00:17:38.812 00:17:38.812 ' 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:38.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.812 --rc genhtml_branch_coverage=1 00:17:38.812 --rc genhtml_function_coverage=1 00:17:38.812 --rc genhtml_legend=1 00:17:38.812 --rc geninfo_all_blocks=1 00:17:38.812 --rc geninfo_unexecuted_blocks=1 00:17:38.812 00:17:38.812 ' 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.812 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.813 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
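The nvmf_veth_init run that follows (common.sh@177 through @225) builds a small bridged veth topology between the test host and the nvmf_tgt_ns_spdk network namespace. As a reading aid it reduces to roughly the standalone sketch below; this is a simplified reconstruction from the trace lines themselves, not the verbatim helper, and it leaves out the best-effort teardown that produces the "Cannot find device" / "Cannot open network namespace" messages, the SPDK_NVMF comment tags added to the iptables rules, and all error handling.

    # Namespace plus two initiator-side and two target-side veth pairs.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace; all addresses come from 10.0.0.0/24.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring the links up and bridge the *_br ends together.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open TCP/4420 on the initiator interfaces, allow bridge-local
    # forwarding, then sanity-ping both directions.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With that in place, 10.0.0.3 and 10.0.0.4 are the target-side addresses the listeners below bind to, and 10.0.0.1 and 10.0.0.2 are the initiator-side addresses.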
00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:38.813 Cannot find device "nvmf_init_br" 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:38.813 Cannot find device "nvmf_init_br2" 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:38.813 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:39.073 Cannot find device "nvmf_tgt_br" 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.073 Cannot find device "nvmf_tgt_br2" 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:39.073 Cannot find device "nvmf_init_br" 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:39.073 Cannot find device "nvmf_init_br2" 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:39.073 Cannot find device "nvmf_tgt_br" 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:39.073 Cannot find device "nvmf_tgt_br2" 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:39.073 Cannot find device "nvmf_br" 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:17:39.073 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:39.074 Cannot find device "nvmf_init_if" 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:39.074 Cannot find device "nvmf_init_if2" 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:39.074 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:39.334 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:39.334 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.334 10:11:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:39.334 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:39.334 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:17:39.334 00:17:39.334 --- 10.0.0.3 ping statistics --- 00:17:39.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.334 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:39.334 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:39.334 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:17:39.334 00:17:39.334 --- 10.0.0.4 ping statistics --- 00:17:39.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.334 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:39.334 00:17:39.334 --- 10.0.0.1 ping statistics --- 00:17:39.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.334 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:39.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:39.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:17:39.334 00:17:39.334 --- 10.0.0.2 ping statistics --- 00:17:39.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.334 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75904 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75904 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75904 ']' 00:17:39.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.334 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.334 [2024-11-19 10:11:53.150857] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:17:39.334 [2024-11-19 10:11:53.151262] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.594 [2024-11-19 10:11:53.304519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.594 [2024-11-19 10:11:53.370942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.594 [2024-11-19 10:11:53.371494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.594 [2024-11-19 10:11:53.371612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.594 [2024-11-19 10:11:53.371711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.594 [2024-11-19 10:11:53.371981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.594 [2024-11-19 10:11:53.372617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.594 [2024-11-19 10:11:53.433620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.853 [2024-11-19 10:11:53.562086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.853 [2024-11-19 10:11:53.570239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.853 10:11:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.853 null0 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.853 null1 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.853 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75933 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75933 /tmp/host.sock 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75933 ']' 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.853 10:11:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.853 [2024-11-19 10:11:53.688900] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
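At this point two separate SPDK app instances are running: the NVMe-oF target inside the namespace (pid 75904, controlled over the default /var/tmp/spdk.sock) and a second nvmf_tgt acting as the host/initiator (pid 75933, controlled over /tmp/host.sock). Condensed from the trace, and leaving out the waitforlisten steps, traps and xtrace plumbing, the setup amounts to roughly:

    # Target side: NVMe/TCP transport, a discovery listener on 8009 and two
    # null bdevs that are later exposed as namespaces of cnode0.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # Host side: a second nvmf_tgt whose bdev_nvme layer runs the discovery
    # service against the target's 8009 listener (started just below in the
    # trace).
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

The rest of the trace mutates the target-side subsystem and watches, through /tmp/host.sock, for the host's controllers, bdevs and notifications to follow along.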
00:17:39.853 [2024-11-19 10:11:53.689325] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75933 ] 00:17:40.112 [2024-11-19 10:11:53.847326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.112 [2024-11-19 10:11:53.918412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.112 [2024-11-19 10:11:53.985531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:40.371 10:11:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:40.371 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:40.372 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:40.372 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.631 10:11:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.631 [2024-11-19 10:11:54.430433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:40.631 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:17:40.890 10:11:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:17:41.457 [2024-11-19 10:11:55.068246] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:41.457 [2024-11-19 10:11:55.068288] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:41.457 [2024-11-19 10:11:55.068311] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:41.457 [2024-11-19 10:11:55.074297] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:41.457 [2024-11-19 10:11:55.129047] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:41.457 [2024-11-19 10:11:55.130187] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fa2e60:1 started. 00:17:41.457 [2024-11-19 10:11:55.132203] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:41.457 [2024-11-19 10:11:55.132234] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:41.457 [2024-11-19 10:11:55.136796] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fa2e60 was disconnected and freed. delete nvme_qpair. 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:42.026 10:11:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.026 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:42.309 [2024-11-19 10:11:55.920907] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fb1000:1 started. 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.309 [2024-11-19 10:11:55.927203] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fb1000 was disconnected and freed. delete nvme_qpair. 
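The waitforcondition / get_notification_count pattern that keeps appearing in this trace is easiest to read as a small polling helper plus a notification cursor. Reconstructed from the xtrace line numbers (autotest_common.sh@918-924 and host/discovery.sh@74-80), it behaves roughly like the sketch below; treat it as an approximation of the real helpers rather than a verbatim copy, and note that the timeout handling in particular is assumed.

    # Poll an eval'd condition up to ~10 times, one second apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1    # timeout path assumed; the traced runs never hit it
    }

    # Fetch notifications newer than the current cursor and advance it;
    # is_notification_count_eq N then waits until exactly N new events
    # (subsystem attach, namespace add, ...) have been observed.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        # cursor advance assumed from the traced notify_id values (0 -> 1 -> 2)
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

That is why the trace alternates between rpc_cmd/jq pairs and the (( notification_count == expected_count )) checks: each is_notification_count_eq call consumes the events generated by the preceding nvmf_subsystem_add_* RPC and bumps notify_id as seen above.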
00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:42.309 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.310 10:11:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.310 [2024-11-19 10:11:56.052020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:42.310 [2024-11-19 10:11:56.053225] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:42.310 [2024-11-19 10:11:56.053258] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:42.310 [2024-11-19 10:11:56.059215] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:42.310 [2024-11-19 10:11:56.122163] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:17:42.310 [2024-11-19 10:11:56.122228] 
bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:42.310 [2024-11-19 10:11:56.122241] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:42.310 [2024-11-19 10:11:56.122248] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:42.310 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.570 [2024-11-19 10:11:56.268516] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:42.570 [2024-11-19 10:11:56.268553] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:42.570 [2024-11-19 10:11:56.269231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:42.570 [2024-11-19 10:11:56.269267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.570 [2024-11-19 10:11:56.269282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:42.570 [2024-11-19 10:11:56.269291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.570 [2024-11-19 10:11:56.269301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:42.570 [2024-11-19 10:11:56.269310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.570 [2024-11-19 10:11:56.269320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:42.570 [2024-11-19 10:11:56.269330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.570 [2024-11-19 10:11:56.269339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1f7f230 is same with the state(6) to be set 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:42.570 [2024-11-19 10:11:56.274468] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:17:42.570 [2024-11-19 10:11:56.274499] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:42.570 [2024-11-19 10:11:56.274579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7f230 (9): Bad file descriptor 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.570 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.571 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.831 
10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.831 10:11:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.212 [2024-11-19 10:11:57.719172] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:44.212 [2024-11-19 10:11:57.719218] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:44.212 [2024-11-19 10:11:57.719238] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:44.212 [2024-11-19 10:11:57.725206] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:17:44.212 [2024-11-19 10:11:57.783568] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:17:44.212 [2024-11-19 10:11:57.784757] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1fa2470:1 started. 00:17:44.212 [2024-11-19 10:11:57.787155] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:44.212 [2024-11-19 10:11:57.787364] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.212 [2024-11-19 10:11:57.789002] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1fa2470 was disconnected and freed. delete nvme_qpair. 
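The host/discovery.sh@141 step above restarts the discovery service over the host application's JSON-RPC socket, and the @143 step that follows expects the same call to be rejected with "File exists" because a discovery service named nvme is already running. For reference, a direct invocation equivalent to what rpc_cmd issues here would look like the line below; every argument is copied from the trace, and the rpc.py path is the one this log references later as rpc_py, so treat the exact path as an assumption about this environment rather than a default.

# Direct form of the discovery start seen at host/discovery.sh@141/@143.
# -w maps to the "wait_for_attach": true field in the JSON-RPC request echoed below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w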
00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.212 request: 00:17:44.212 { 00:17:44.212 "name": "nvme", 00:17:44.212 "trtype": "tcp", 00:17:44.212 "traddr": "10.0.0.3", 00:17:44.212 "adrfam": "ipv4", 00:17:44.212 "trsvcid": "8009", 00:17:44.212 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:44.212 "wait_for_attach": true, 00:17:44.212 "method": "bdev_nvme_start_discovery", 00:17:44.212 "req_id": 1 00:17:44.212 } 00:17:44.212 Got JSON-RPC error response 00:17:44.212 response: 00:17:44.212 { 00:17:44.212 "code": -17, 00:17:44.212 "message": "File exists" 00:17:44.212 } 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.212 request: 00:17:44.212 { 00:17:44.212 "name": "nvme_second", 00:17:44.212 "trtype": "tcp", 00:17:44.212 "traddr": "10.0.0.3", 00:17:44.212 "adrfam": "ipv4", 00:17:44.212 "trsvcid": "8009", 00:17:44.212 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:44.212 "wait_for_attach": true, 00:17:44.212 "method": "bdev_nvme_start_discovery", 00:17:44.212 "req_id": 1 00:17:44.212 } 00:17:44.212 Got JSON-RPC error response 00:17:44.212 response: 00:17:44.212 { 00:17:44.212 "code": -17, 00:17:44.212 "message": "File exists" 00:17:44.212 } 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:44.212 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:44.213 10:11:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:44.213 10:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.588 [2024-11-19 10:11:59.075855] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.588 [2024-11-19 10:11:59.075982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa4370 with addr=10.0.0.3, port=8010 00:17:45.588 [2024-11-19 10:11:59.076012] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:45.588 [2024-11-19 10:11:59.076024] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:45.588 [2024-11-19 10:11:59.076034] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:46.524 [2024-11-19 10:12:00.075862] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.524 [2024-11-19 10:12:00.075960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa4370 with addr=10.0.0.3, port=8010 00:17:46.524 [2024-11-19 10:12:00.075987] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:46.524 [2024-11-19 10:12:00.075998] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:46.524 [2024-11-19 10:12:00.076009] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:47.460 [2024-11-19 10:12:01.075684] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:17:47.460 request: 00:17:47.460 { 00:17:47.460 "name": "nvme_second", 00:17:47.461 "trtype": "tcp", 00:17:47.461 "traddr": "10.0.0.3", 00:17:47.461 "adrfam": "ipv4", 00:17:47.461 "trsvcid": "8010", 00:17:47.461 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:47.461 "wait_for_attach": false, 00:17:47.461 "attach_timeout_ms": 3000, 00:17:47.461 "method": "bdev_nvme_start_discovery", 00:17:47.461 "req_id": 1 00:17:47.461 } 00:17:47.461 Got JSON-RPC error response 00:17:47.461 response: 00:17:47.461 { 00:17:47.461 "code": -110, 00:17:47.461 "message": "Connection timed out" 00:17:47.461 } 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:47.461 10:12:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75933 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:47.461 rmmod nvme_tcp 00:17:47.461 rmmod nvme_fabrics 00:17:47.461 rmmod nvme_keyring 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75904 ']' 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75904 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75904 ']' 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75904 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75904 00:17:47.461 killing process with pid 75904 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75904' 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75904 00:17:47.461 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75904 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:47.720 10:12:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:47.720 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:47.979 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:17:47.980 00:17:47.980 real 0m9.374s 00:17:47.980 user 0m17.655s 00:17:47.980 sys 0m2.051s 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.980 ************************************ 00:17:47.980 END TEST nvmf_host_discovery 00:17:47.980 ************************************ 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.980 
************************************ 00:17:47.980 START TEST nvmf_host_multipath_status 00:17:47.980 ************************************ 00:17:47.980 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:48.240 * Looking for test storage... 00:17:48.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:48.240 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:48.240 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:17:48.240 10:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:48.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.240 --rc genhtml_branch_coverage=1 00:17:48.240 --rc genhtml_function_coverage=1 00:17:48.240 --rc genhtml_legend=1 00:17:48.240 --rc geninfo_all_blocks=1 00:17:48.240 --rc geninfo_unexecuted_blocks=1 00:17:48.240 00:17:48.240 ' 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:48.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.240 --rc genhtml_branch_coverage=1 00:17:48.240 --rc genhtml_function_coverage=1 00:17:48.240 --rc genhtml_legend=1 00:17:48.240 --rc geninfo_all_blocks=1 00:17:48.240 --rc geninfo_unexecuted_blocks=1 00:17:48.240 00:17:48.240 ' 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:48.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.240 --rc genhtml_branch_coverage=1 00:17:48.240 --rc genhtml_function_coverage=1 00:17:48.240 --rc genhtml_legend=1 00:17:48.240 --rc geninfo_all_blocks=1 00:17:48.240 --rc geninfo_unexecuted_blocks=1 00:17:48.240 00:17:48.240 ' 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:48.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.240 --rc genhtml_branch_coverage=1 00:17:48.240 --rc genhtml_function_coverage=1 00:17:48.240 --rc genhtml_legend=1 00:17:48.240 --rc geninfo_all_blocks=1 00:17:48.240 --rc geninfo_unexecuted_blocks=1 00:17:48.240 00:17:48.240 ' 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.240 10:12:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.240 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.241 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
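One line in the trace above deserves a note: "[: : integer expression expected" from nvmf/common.sh line 33 is not a failure of the run. build_nvmf_app_args evaluates '[' '' -eq 1 ']' because the flag it tests is unset, [ cannot compare an empty string as an integer, and the condition is simply treated as false, so the script moves on to the next branch. A minimal sketch of the pattern and its quieter form; SOME_FLAG is a placeholder name, not the variable nvmf/common.sh actually tests:

  # What the trace effectively executed (the flag is unset, so it expands to ''):
  SOME_FLAG=""
  [ "$SOME_FLAG" -eq 1 ] && echo "flag enabled"      # prints "integer expression expected", condition is false

  # Same intent without the noise: give the expansion a numeric default first.
  [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag enabled"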
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:48.241 Cannot find device "nvmf_init_br" 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:48.241 Cannot find device "nvmf_init_br2" 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:48.241 Cannot find device "nvmf_tgt_br" 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:17:48.241 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.500 Cannot find device "nvmf_tgt_br2" 00:17:48.500 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:17:48.500 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:48.500 Cannot find device "nvmf_init_br" 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:48.501 Cannot find device "nvmf_init_br2" 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:48.501 Cannot find device "nvmf_tgt_br" 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:48.501 Cannot find device "nvmf_tgt_br2" 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:48.501 Cannot find device "nvmf_br" 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:17:48.501 Cannot find device "nvmf_init_if" 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:48.501 Cannot find device "nvmf_init_if2" 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
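The "Cannot find device" and "Cannot open network namespace" messages a little further up are expected: nvmf_veth_init first tears down a topology that does not exist yet, then builds it. Condensed from the commands traced above (all of them appear verbatim in the log), the resulting layout is one target namespace and four veth pairs addressed on 10.0.0.0/24:

  # Target namespace plus two initiator-side and two target-side veth pairs.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # The target ends move into the namespace; the initiator ends stay in the root netns.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiators on .1/.2, target listeners will sit on .3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up, including loopback inside the namespace.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up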
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:48.501 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:48.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:48.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:17:48.760 00:17:48.760 --- 10.0.0.3 ping statistics --- 00:17:48.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.760 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:48.760 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:48.760 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:17:48.760 00:17:48.760 --- 10.0.0.4 ping statistics --- 00:17:48.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.760 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:48.760 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:48.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:48.761 00:17:48.761 --- 10.0.0.1 ping statistics --- 00:17:48.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.761 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:48.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:48.761 00:17:48.761 --- 10.0.0.2 ping statistics --- 00:17:48.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.761 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76436 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76436 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76436 ']' 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
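The remainder of nvmf_veth_init, traced just above, bridges the four peer interfaces together, opens the NVMe/TCP port in iptables, and proves reachability in both directions with single pings before anything SPDK-related starts; only then is nvme-tcp loaded and the target launched inside the namespace. Condensed from the trace (absolute paths shortened, the iptables comment arguments dropped):

  # Bridge the peer ends so the root and target namespaces share one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Let NVMe/TCP (port 4420) in on the initiator interfaces and allow bridge forwarding.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Reachability check in both directions (the four pings in the log).
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

  # Start the target on two cores inside the namespace; the harness then waits
  # for /var/tmp/spdk.sock with waitforlisten before issuing any RPCs.
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &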
00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.761 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:48.761 [2024-11-19 10:12:02.539263] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:17:48.761 [2024-11-19 10:12:02.539365] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.020 [2024-11-19 10:12:02.689588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:49.020 [2024-11-19 10:12:02.757897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.020 [2024-11-19 10:12:02.757979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.020 [2024-11-19 10:12:02.758006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.020 [2024-11-19 10:12:02.758017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.020 [2024-11-19 10:12:02.758027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.020 [2024-11-19 10:12:02.759331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.020 [2024-11-19 10:12:02.759345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.020 [2024-11-19 10:12:02.818408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:49.020 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.020 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:49.020 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.020 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.020 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:49.279 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.279 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76436 00:17:49.279 10:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:49.538 [2024-11-19 10:12:03.226229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.538 10:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:49.797 Malloc0 00:17:49.797 10:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:50.056 10:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.316 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:50.575 [2024-11-19 10:12:04.459872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:50.836 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:50.836 [2024-11-19 10:12:04.720021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76484 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76484 /var/tmp/bdevperf.sock 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76484 ']' 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
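Once the target's RPC socket is listening, the trace provisions a small ANA-capable configuration and starts the bdevperf process that will act as the host: a TCP transport, one malloc namespace, and two listeners on the same address so the initiator ends up with two distinct paths. Condensed from the RPCs above (rpc.py and bdevperf are invoked by their full repo paths in the log):

  # Target side: transport, backing bdev, subsystem with ANA reporting, namespace.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # Two listeners, same IP, different ports: these become the two multipath legs.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # Host side: bdevperf idles (-z) on its own RPC socket until perform_tests is called.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &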
00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.095 10:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:52.029 10:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.030 10:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:52.030 10:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:52.599 10:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:52.858 Nvme0n1 00:17:52.858 10:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:53.117 Nvme0n1 00:17:53.117 10:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:53.117 10:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:55.021 10:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:55.021 10:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:55.588 10:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:55.848 10:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:56.785 10:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:56.785 10:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:56.785 10:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.785 10:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:57.044 10:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.044 10:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:57.044 10:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.044 10:12:10 
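The two bdev_nvme_attach_controller calls traced above are what make this a multipath test: the same subsystem is attached twice through bdevperf's RPC socket, once per listener port, so both connections collapse into a single Nvme0n1 bdev with two I/O paths. set_ANA_state then drives the listeners' ANA states independently for the rest of the run. The attach commands below are condensed from the trace; the set_ANA_state body is a reconstruction of what the helper evidently executes, not the script verbatim:

  # bdev_nvme options as set in the trace, then two multipath attachments of cnode1.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  # Background I/O for the duration of the status checks.
  ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &

  # Sketch: $1 is applied to the 4420 listener, $2 to the 4421 listener.
  set_ANA_state() {
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }
  set_ANA_state optimized optimized     # first combination checked below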
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:57.302 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:57.302 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:57.302 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:57.302 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.560 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.560 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:57.560 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.560 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:57.819 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.819 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:57.819 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.819 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:58.078 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.078 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:58.337 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.337 10:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:58.596 10:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.596 10:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:58.596 10:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:58.854 10:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:59.113 10:12:12 
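Every port_status line in the trace is the same probe with a different projection: ask bdevperf for its view of the I/O paths and compare one field (current, connected or accessible) of one listener against the expected value; check_status strings six of those probes together, 4420 first. A behavioural sketch reconstructed from the trace, not the helpers' actual source:

  # $1 = listener port, $2 = field to inspect, $3 = expected value ("true"/"false").
  port_status() {
      local got
      got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }

  # Arguments: current(4420) current(4421) connected(4420) connected(4421) accessible(4420) accessible(4421)
  check_status() {
      port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
      port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

  # Example from the first pass (both listeners optimized, default active_passive policy):
  # only the 4420 path is current, both are connected and accessible.
  check_status true false true true true true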
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:00.049 10:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:00.049 10:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:00.049 10:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.049 10:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:00.309 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:00.309 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:00.309 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:00.309 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.876 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.876 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:00.876 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.876 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:01.135 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.135 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:01.136 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.136 10:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:01.394 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.394 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:01.394 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.394 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:01.653 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.653 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:01.653 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.653 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:01.928 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.928 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:01.928 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:02.191 10:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:02.449 10:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:03.384 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:03.384 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:03.384 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.384 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:03.643 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.643 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:03.643 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:03.643 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.250 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:04.250 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:04.250 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.250 10:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:04.250 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.250 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:18:04.250 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.250 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:04.549 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.549 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:04.549 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.549 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:04.808 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.808 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:04.808 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.808 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:05.066 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:05.067 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:05.067 10:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:05.633 10:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:05.891 10:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:06.828 10:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:06.828 10:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:06.828 10:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.828 10:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:07.087 10:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.087 10:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:07.087 10:12:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.087 10:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:07.345 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:07.345 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:07.345 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.345 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:07.604 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.604 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:07.604 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.604 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:08.173 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:08.173 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:08.173 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.173 10:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:08.173 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:08.173 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:08.173 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.173 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:08.739 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:08.740 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:08.740 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:08.998 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:09.257 10:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:10.193 10:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:10.193 10:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:10.193 10:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:10.193 10:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:10.451 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:10.451 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:10.451 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:10.451 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:10.709 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:10.709 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:10.709 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:10.709 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:11.345 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.345 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:11.345 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.345 10:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:11.345 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.345 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:11.345 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.345 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:18:11.619 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:11.619 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:11.619 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.619 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:11.878 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:11.878 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:11.878 10:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:12.446 10:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:12.704 10:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:13.640 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:13.640 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:13.640 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.640 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:13.899 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:13.899 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:13.899 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:13.899 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.158 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:14.158 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:14.158 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.158 10:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:18:14.416 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:14.416 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:14.416 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.416 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:14.983 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:14.983 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:14.983 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:14.983 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.242 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:15.242 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:15.242 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.242 10:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:15.501 10:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.501 10:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:15.760 10:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:15.760 10:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:16.020 10:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:16.279 10:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:17.216 10:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:17.216 10:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:17.216 10:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
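Stepping back, the middle of the test is one repeated pattern: set a combination of ANA states, sleep a second, then assert the six status booleans. Using the helper sketches from above, these are the combinations exercised under the default active_passive policy and the exact expectations the trace asserts for each (a sleep 1 sits between the set and the check in the real script):

  # Arguments to check_status: current 4420/4421, connected 4420/4421, accessible 4420/4421.
  set_ANA_state optimized      optimized;      check_status true  false true true true  true
  set_ANA_state non_optimized  optimized;      check_status false true  true true true  true
  set_ANA_state non_optimized  non_optimized;  check_status true  false true true true  true
  set_ANA_state non_optimized  inaccessible;   check_status true  false true true true  false
  set_ANA_state inaccessible   inaccessible;   check_status false false true true false false
  set_ANA_state inaccessible   optimized;      check_status false true  true true false true

In short: connected stays true throughout (the TCP connections survive the ANA transitions), accessible tracks whether a listener is inaccessible, and current moves to the preferred usable path.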
00:18:17.216 10:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:17.475 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:17.475 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:17.475 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:17.475 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.043 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.043 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:18.043 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.043 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:18.301 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.301 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:18.301 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.301 10:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:18.560 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.560 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:18.560 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:18.560 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.818 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.818 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:18.818 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.818 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:19.076 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.077 
10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:19.077 10:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:19.335 10:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:19.594 10:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:20.970 10:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:20.970 10:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:20.970 10:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.970 10:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:20.970 10:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:20.970 10:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:20.970 10:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.970 10:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:21.235 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.235 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:21.235 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.235 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:21.506 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.506 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:21.506 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.506 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:21.765 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.765 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:21.765 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:21.765 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.333 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.333 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:22.333 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.333 10:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:22.592 10:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.592 10:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:22.592 10:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:22.851 10:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:23.110 10:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:24.487 10:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:24.487 10:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:24.487 10:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.487 10:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:24.487 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.487 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:24.487 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.487 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:24.745 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.745 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:18:24.745 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.745 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:25.004 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.004 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:25.004 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.004 10:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:25.263 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.263 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:25.263 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.263 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:25.521 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.521 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:25.521 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:25.521 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.088 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.088 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:26.088 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:26.348 10:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:26.605 10:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:27.544 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:27.544 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:27.544 10:12:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:27.544 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:27.802 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.802 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:27.802 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:27.802 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:28.060 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:28.060 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:28.060 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.060 10:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:28.626 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.626 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:28.626 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.626 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:28.885 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.885 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:28.885 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.885 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:29.144 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.144 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:29.144 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.144 10:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:18:29.404 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:29.404 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76484 00:18:29.404 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76484 ']' 00:18:29.404 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76484 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76484 00:18:29.666 killing process with pid 76484 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76484' 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76484 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76484 00:18:29.666 { 00:18:29.666 "results": [ 00:18:29.666 { 00:18:29.666 "job": "Nvme0n1", 00:18:29.666 "core_mask": "0x4", 00:18:29.666 "workload": "verify", 00:18:29.666 "status": "terminated", 00:18:29.666 "verify_range": { 00:18:29.666 "start": 0, 00:18:29.666 "length": 16384 00:18:29.666 }, 00:18:29.666 "queue_depth": 128, 00:18:29.666 "io_size": 4096, 00:18:29.666 "runtime": 36.308257, 00:18:29.666 "iops": 7858.405320861312, 00:18:29.666 "mibps": 30.6968957846145, 00:18:29.666 "io_failed": 0, 00:18:29.666 "io_timeout": 0, 00:18:29.666 "avg_latency_us": 16254.511118045608, 00:18:29.666 "min_latency_us": 621.8472727272728, 00:18:29.666 "max_latency_us": 4026531.84 00:18:29.666 } 00:18:29.666 ], 00:18:29.666 "core_count": 1 00:18:29.666 } 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76484 00:18:29.666 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:29.666 [2024-11-19 10:12:04.800809] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:18:29.667 [2024-11-19 10:12:04.800990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76484 ] 00:18:29.667 [2024-11-19 10:12:04.955679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.667 [2024-11-19 10:12:05.023981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.667 [2024-11-19 10:12:05.082216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.667 Running I/O for 90 seconds... 
00:18:29.667 6804.00 IOPS, 26.58 MiB/s [2024-11-19T10:12:43.556Z] 6794.50 IOPS, 26.54 MiB/s [2024-11-19T10:12:43.556Z] 6747.67 IOPS, 26.36 MiB/s [2024-11-19T10:12:43.556Z] 6693.00 IOPS, 26.14 MiB/s [2024-11-19T10:12:43.556Z] 6858.20 IOPS, 26.79 MiB/s [2024-11-19T10:12:43.556Z] 7258.83 IOPS, 28.35 MiB/s [2024-11-19T10:12:43.556Z] 7544.14 IOPS, 29.47 MiB/s [2024-11-19T10:12:43.556Z] 7769.12 IOPS, 30.35 MiB/s [2024-11-19T10:12:43.556Z] 7941.33 IOPS, 31.02 MiB/s [2024-11-19T10:12:43.556Z] 7894.70 IOPS, 30.84 MiB/s [2024-11-19T10:12:43.556Z] 7828.55 IOPS, 30.58 MiB/s [2024-11-19T10:12:43.556Z] 7773.50 IOPS, 30.37 MiB/s [2024-11-19T10:12:43.556Z] 7698.46 IOPS, 30.07 MiB/s [2024-11-19T10:12:43.556Z] 7624.00 IOPS, 29.78 MiB/s [2024-11-19T10:12:43.556Z] 7559.53 IOPS, 29.53 MiB/s [2024-11-19T10:12:43.556Z] [2024-11-19 10:12:22.666448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.667 [2024-11-19 10:12:22.666531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.666611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.666640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.666671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.666691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.666718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.666738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.666765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.666785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.666812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.666842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.666868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.666888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.666931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.666955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.666990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:29.667 [2024-11-19 10:12:22.667503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.667960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.667980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.668007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.668027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.668054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.668074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.668115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.668138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.668166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.668186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.668225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.668247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.668291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.668316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:29.667 [2024-11-19 10:12:22.668345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.667 [2024-11-19 10:12:22.668378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.668951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.668979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:18:29.668 [2024-11-19 10:12:22.669090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.669954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.669982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.670002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.670028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.670048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.670074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.670094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.670121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.670140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.670167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.670186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.670213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.670232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.670270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.670291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.670318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.668 [2024-11-19 10:12:22.670338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:29.668 [2024-11-19 10:12:22.670396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:29.669 [2024-11-19 10:12:22.670633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.670970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.670991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.671552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.671572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.672864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.672897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.672952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.672979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:29.669 [2024-11-19 10:12:22.673467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.669 [2024-11-19 10:12:22.673729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:29.669 [2024-11-19 10:12:22.673762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.673782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.673816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.673847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.673880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.673901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.673951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.673975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:22.674619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:22.674673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:22.674725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:22.674751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:29.670 7351.00 IOPS, 28.71 MiB/s [2024-11-19T10:12:43.559Z] 6918.59 IOPS, 27.03 MiB/s [2024-11-19T10:12:43.559Z] 6534.22 IOPS, 25.52 MiB/s [2024-11-19T10:12:43.559Z] 6190.32 IOPS, 24.18 MiB/s [2024-11-19T10:12:43.559Z] 6047.55 IOPS, 23.62 MiB/s [2024-11-19T10:12:43.559Z] 6196.86 IOPS, 24.21 MiB/s [2024-11-19T10:12:43.559Z] 6331.68 IOPS, 24.73 MiB/s [2024-11-19T10:12:43.559Z] 6490.83 IOPS, 25.35 MiB/s [2024-11-19T10:12:43.559Z] 6726.42 IOPS, 26.28 MiB/s [2024-11-19T10:12:43.559Z] 6948.24 IOPS, 27.14 MiB/s [2024-11-19T10:12:43.559Z] 7158.12 IOPS, 27.96 MiB/s [2024-11-19T10:12:43.559Z] 7238.19 IOPS, 28.27 MiB/s [2024-11-19T10:12:43.559Z] 7309.11 IOPS, 28.55 MiB/s [2024-11-19T10:12:43.559Z] 7375.10 IOPS, 28.81 MiB/s [2024-11-19T10:12:43.559Z] 7447.70 IOPS, 29.09 MiB/s [2024-11-19T10:12:43.559Z] 7571.10 IOPS, 29.57 MiB/s [2024-11-19T10:12:43.559Z] 7689.50 IOPS, 30.04 MiB/s [2024-11-19T10:12:43.559Z] 7750.15 IOPS, 30.27 MiB/s [2024-11-19T10:12:43.559Z] [2024-11-19 10:12:40.241818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.241902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.241976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:40.242027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:40.242069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:40.242194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:40.242232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.670 [2024-11-19 10:12:40.242506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:40.242960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.242999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:40.243019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.243040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:40.243055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.243077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.670 [2024-11-19 10:12:40.243092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:29.670 [2024-11-19 10:12:40.243113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.671 [2024-11-19 10:12:40.243127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.243149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.671 [2024-11-19 10:12:40.243164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.243185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.671 [2024-11-19 10:12:40.243200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.243222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.671 [2024-11-19 10:12:40.243237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
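The interleaved "NNNN.NN IOPS, NN.NN MiB/s" readings in this stretch of the log appear to be bdevperf's periodic progress samples for the same verify job, and they are consistent with the 4 KiB I/O size: MiB/s = IOPS x 4096 / 2^20. The run summary a few lines below reports the same relationship for the whole 36.3-second run (7858.41 IOPS, 30.70 MiB/s, 0 failures). A quick sanity check of one sample, purely as illustration:

  # 4096-byte I/Os, so MiB/s = IOPS * 4096 / 1048576
  awk 'BEGIN { iops = 7858.41; printf "%.2f MiB/s\n", iops * 4096 / 1048576 }'   # prints 30.70 MiB/s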
00:18:29.671 [2024-11-19 10:12:40.245553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:29.671 [2024-11-19 10:12:40.245845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.671 [2024-11-19 10:12:40.245860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.671 7787.41 IOPS, 30.42 MiB/s [2024-11-19T10:12:43.560Z] 7818.43 IOPS, 30.54 MiB/s [2024-11-19T10:12:43.560Z] 7850.36 IOPS, 30.67 MiB/s [2024-11-19T10:12:43.560Z] Received shutdown signal, test time was about 36.309082 seconds 00:18:29.671 00:18:29.671 Latency(us) 00:18:29.671 [2024-11-19T10:12:43.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.671 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.671 
Verification LBA range: start 0x0 length 0x4000 00:18:29.671 Nvme0n1 : 36.31 7858.41 30.70 0.00 0.00 16254.51 621.85 4026531.84 00:18:29.671 [2024-11-19T10:12:43.560Z] =================================================================================================================== 00:18:29.671 [2024-11-19T10:12:43.560Z] Total : 7858.41 30.70 0.00 0.00 16254.51 621.85 4026531.84 00:18:29.671 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.240 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:30.240 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:30.240 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:30.240 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:30.240 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:18:30.240 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:30.240 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:30.241 rmmod nvme_tcp 00:18:30.241 rmmod nvme_fabrics 00:18:30.241 rmmod nvme_keyring 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76436 ']' 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76436 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76436 ']' 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76436 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.241 10:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76436 00:18:30.241 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.241 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.241 killing process with pid 76436 00:18:30.241 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76436' 00:18:30.241 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76436 00:18:30.241 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@978 -- # wait 76436 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:30.500 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:18:30.759 00:18:30.759 real 0m42.604s 00:18:30.759 user 2m18.274s 00:18:30.759 sys 0m12.921s 00:18:30.759 10:12:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:30.759 ************************************ 00:18:30.759 END TEST nvmf_host_multipath_status 00:18:30.759 ************************************ 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.759 ************************************ 00:18:30.759 START TEST nvmf_discovery_remove_ifc 00:18:30.759 ************************************ 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:30.759 * Looking for test storage... 00:18:30.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:30.759 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.019 --rc genhtml_branch_coverage=1 00:18:31.019 --rc genhtml_function_coverage=1 00:18:31.019 --rc genhtml_legend=1 00:18:31.019 --rc geninfo_all_blocks=1 00:18:31.019 --rc geninfo_unexecuted_blocks=1 00:18:31.019 00:18:31.019 ' 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.019 --rc genhtml_branch_coverage=1 00:18:31.019 --rc genhtml_function_coverage=1 00:18:31.019 --rc genhtml_legend=1 00:18:31.019 --rc geninfo_all_blocks=1 00:18:31.019 --rc geninfo_unexecuted_blocks=1 00:18:31.019 00:18:31.019 ' 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.019 --rc genhtml_branch_coverage=1 00:18:31.019 --rc genhtml_function_coverage=1 00:18:31.019 --rc genhtml_legend=1 00:18:31.019 --rc geninfo_all_blocks=1 00:18:31.019 --rc geninfo_unexecuted_blocks=1 00:18:31.019 00:18:31.019 ' 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.019 --rc genhtml_branch_coverage=1 00:18:31.019 --rc genhtml_function_coverage=1 00:18:31.019 --rc genhtml_legend=1 00:18:31.019 --rc geninfo_all_blocks=1 00:18:31.019 --rc geninfo_unexecuted_blocks=1 00:18:31.019 00:18:31.019 ' 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.019 10:12:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.019 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:31.020 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.020 10:12:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:31.020 Cannot find device "nvmf_init_br" 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:31.020 Cannot find device "nvmf_init_br2" 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:31.020 Cannot find device "nvmf_tgt_br" 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.020 Cannot find device "nvmf_tgt_br2" 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:31.020 Cannot find device "nvmf_init_br" 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:18:31.020 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:31.021 Cannot find device "nvmf_init_br2" 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:31.021 Cannot find device "nvmf_tgt_br" 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:31.021 Cannot find device "nvmf_tgt_br2" 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:31.021 Cannot find device "nvmf_br" 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:31.021 Cannot find device "nvmf_init_if" 00:18:31.021 10:12:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:31.021 Cannot find device "nvmf_init_if2" 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:31.021 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:31.280 10:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:31.280 10:12:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:31.280 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:31.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:31.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:31.281 00:18:31.281 --- 10.0.0.3 ping statistics --- 00:18:31.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.281 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:31.281 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:31.281 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:18:31.281 00:18:31.281 --- 10.0.0.4 ping statistics --- 00:18:31.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.281 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:31.281 00:18:31.281 --- 10.0.0.1 ping statistics --- 00:18:31.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.281 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:31.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:18:31.281 00:18:31.281 --- 10.0.0.2 ping statistics --- 00:18:31.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.281 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77355 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77355 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77355 ']' 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
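The "Cannot find device" and "Cannot open network namespace" messages above are nvmf_veth_init probing for leftovers from a previous run before rebuilding the topology. The ip/iptables commands that follow create two initiator veth interfaces (10.0.0.1 and 10.0.0.2) bridged over nvmf_br to two target interfaces (10.0.0.3 and 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, and the four pings verify that path. A condensed sketch of the same setup for one initiator/target pair, using the device names and addresses from this log:

  # Condensed version of what nvmf_veth_init does for one interface pair.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side + bridge peer
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side + bridge peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge the two root-namespace peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP toward the target
  ping -c 1 10.0.0.3                                            # initiator-to-target connectivity check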
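Once connectivity is verified, nvmfappstart launches the target inside the namespace (nvmfpid=77355 in this run) and waitforlisten blocks until the application's JSON-RPC socket answers before the test script issues any RPCs. A minimal sketch of that start-and-wait pattern follows; the polling loop is an illustration of what waitforlisten does, not the literal autotest helper:

  # Start the SPDK target in the test namespace, pinned to core 1 (-m 0x2),
  # then poll its RPC socket until it responds. Loop body is illustrative.
  spdk=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
          sleep 0.5
  done
  # test RPCs (transport, subsystem, listeners) are issued from this point on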
00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.281 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:31.541 [2024-11-19 10:12:45.226162] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:18:31.541 [2024-11-19 10:12:45.226281] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.541 [2024-11-19 10:12:45.381323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.800 [2024-11-19 10:12:45.450506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.800 [2024-11-19 10:12:45.450585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.800 [2024-11-19 10:12:45.450609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.800 [2024-11-19 10:12:45.450619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.800 [2024-11-19 10:12:45.450629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.800 [2024-11-19 10:12:45.451109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.800 [2024-11-19 10:12:45.511633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.800 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:31.800 [2024-11-19 10:12:45.640992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.800 [2024-11-19 10:12:45.649191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:31.800 null0 00:18:31.800 [2024-11-19 10:12:45.681059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77380 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77380 /tmp/host.sock 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77380 ']' 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.059 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.059 10:12:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:32.059 [2024-11-19 10:12:45.763788] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:18:32.059 [2024-11-19 10:12:45.763888] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77380 ] 00:18:32.059 [2024-11-19 10:12:45.915069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.319 [2024-11-19 10:12:45.984206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.319 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.319 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:32.320 [2024-11-19 10:12:46.106314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.320 10:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:33.713 [2024-11-19 10:12:47.164364] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:33.713 [2024-11-19 10:12:47.164412] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:33.713 [2024-11-19 10:12:47.164437] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:33.713 [2024-11-19 10:12:47.170423] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:33.713 [2024-11-19 10:12:47.224876] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:33.713 [2024-11-19 10:12:47.226442] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x79cfc0:1 started. 00:18:33.713 [2024-11-19 10:12:47.228860] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:33.713 [2024-11-19 10:12:47.228976] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:33.713 [2024-11-19 10:12:47.229026] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:33.713 [2024-11-19 10:12:47.229055] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:33.713 [2024-11-19 10:12:47.229097] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:33.713 [2024-11-19 10:12:47.232885] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x79cfc0 was disconnected and freed. delete nvme_qpair. 
00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:33.713 10:12:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:34.651 10:12:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:34.651 10:12:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:35.585 10:12:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:36.961 10:12:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:37.895 10:12:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:37.895 10:12:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:38.832 10:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:38.832 [2024-11-19 10:12:52.655524] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:38.832 [2024-11-19 10:12:52.655604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.832 [2024-11-19 10:12:52.655619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.832 [2024-11-19 10:12:52.655632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.832 [2024-11-19 10:12:52.655640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.832 [2024-11-19 10:12:52.655650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.832 [2024-11-19 10:12:52.655675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.832 [2024-11-19 10:12:52.655701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.832 [2024-11-19 10:12:52.655710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.832 [2024-11-19 10:12:52.655721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.832 [2024-11-19 
10:12:52.655730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.832 [2024-11-19 10:12:52.655740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x779240 is same with the state(6) to be set 00:18:38.832 [2024-11-19 10:12:52.665520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x779240 (9): Bad file descriptor 00:18:38.832 [2024-11-19 10:12:52.675543] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:38.832 [2024-11-19 10:12:52.675579] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:38.832 [2024-11-19 10:12:52.675586] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:38.832 [2024-11-19 10:12:52.675592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:38.832 [2024-11-19 10:12:52.675633] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:40.208 [2024-11-19 10:12:53.732990] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:18:40.208 [2024-11-19 10:12:53.733134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x779240 with addr=10.0.0.3, port=4420 00:18:40.208 [2024-11-19 10:12:53.733166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x779240 is same with the state(6) to be set 00:18:40.208 [2024-11-19 10:12:53.733233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x779240 (9): Bad file descriptor 00:18:40.208 [2024-11-19 10:12:53.733797] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:18:40.208 [2024-11-19 10:12:53.733868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:40.208 [2024-11-19 10:12:53.733889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:40.208 [2024-11-19 10:12:53.733908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:40.208 [2024-11-19 10:12:53.733953] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:18:40.208 [2024-11-19 10:12:53.733968] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:40.208 [2024-11-19 10:12:53.733977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:40.208 [2024-11-19 10:12:53.733995] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:40.208 [2024-11-19 10:12:53.734005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:40.208 10:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:41.143 [2024-11-19 10:12:54.734063] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:41.143 [2024-11-19 10:12:54.734138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:41.143 [2024-11-19 10:12:54.734172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:41.143 [2024-11-19 10:12:54.734199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:41.143 [2024-11-19 10:12:54.734210] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:18:41.143 [2024-11-19 10:12:54.734220] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:41.143 [2024-11-19 10:12:54.734227] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:41.143 [2024-11-19 10:12:54.734232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:18:41.143 [2024-11-19 10:12:54.734270] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:18:41.143 [2024-11-19 10:12:54.734327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.143 [2024-11-19 10:12:54.734343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.143 [2024-11-19 10:12:54.734356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.143 [2024-11-19 10:12:54.734365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.143 [2024-11-19 10:12:54.734374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.143 [2024-11-19 10:12:54.734383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.143 [2024-11-19 10:12:54.734393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.143 [2024-11-19 10:12:54.734402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.143 [2024-11-19 10:12:54.734412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.143 [2024-11-19 10:12:54.734420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.143 [2024-11-19 10:12:54.734430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:18:41.143 [2024-11-19 10:12:54.734505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x704a20 (9): Bad file descriptor 00:18:41.143 [2024-11-19 10:12:54.735499] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:41.143 [2024-11-19 10:12:54.735522] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:41.143 10:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:42.077 10:12:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:42.077 10:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:43.011 [2024-11-19 10:12:56.739255] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:43.011 [2024-11-19 10:12:56.739297] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:43.011 [2024-11-19 10:12:56.739317] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:43.011 [2024-11-19 10:12:56.745292] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:18:43.011 [2024-11-19 10:12:56.799634] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:18:43.011 [2024-11-19 10:12:56.800528] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x755f00:1 started. 00:18:43.011 [2024-11-19 10:12:56.801904] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:43.011 [2024-11-19 10:12:56.801961] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:43.011 [2024-11-19 10:12:56.801987] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:43.012 [2024-11-19 10:12:56.802004] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:18:43.012 [2024-11-19 10:12:56.802014] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:43.012 [2024-11-19 10:12:56.807790] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x755f00 was disconnected and freed. delete nvme_qpair. 
00:18:43.270 10:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:43.270 10:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:43.270 10:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:43.270 10:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.270 10:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:43.270 10:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:43.270 10:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:43.270 10:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77380 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77380 ']' 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77380 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77380 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77380' 00:18:43.270 killing process with pid 77380 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77380 00:18:43.270 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77380 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.528 rmmod nvme_tcp 00:18:43.528 rmmod nvme_fabrics 00:18:43.528 rmmod nvme_keyring 00:18:43.528 10:12:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77355 ']' 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77355 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77355 ']' 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77355 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77355 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.528 killing process with pid 77355 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77355' 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77355 00:18:43.528 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77355 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:43.787 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:18:44.046 00:18:44.046 real 0m13.299s 00:18:44.046 user 0m22.516s 00:18:44.046 sys 0m2.472s 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.046 ************************************ 00:18:44.046 END TEST nvmf_discovery_remove_ifc 00:18:44.046 ************************************ 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.046 ************************************ 00:18:44.046 START TEST nvmf_identify_kernel_target 00:18:44.046 ************************************ 00:18:44.046 10:12:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:44.306 * Looking for test storage... 
00:18:44.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:44.306 10:12:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.306 10:12:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.306 10:12:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:44.306 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.307 --rc genhtml_branch_coverage=1 00:18:44.307 --rc genhtml_function_coverage=1 00:18:44.307 --rc genhtml_legend=1 00:18:44.307 --rc geninfo_all_blocks=1 00:18:44.307 --rc geninfo_unexecuted_blocks=1 00:18:44.307 00:18:44.307 ' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.307 --rc genhtml_branch_coverage=1 00:18:44.307 --rc genhtml_function_coverage=1 00:18:44.307 --rc genhtml_legend=1 00:18:44.307 --rc geninfo_all_blocks=1 00:18:44.307 --rc geninfo_unexecuted_blocks=1 00:18:44.307 00:18:44.307 ' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.307 --rc genhtml_branch_coverage=1 00:18:44.307 --rc genhtml_function_coverage=1 00:18:44.307 --rc genhtml_legend=1 00:18:44.307 --rc geninfo_all_blocks=1 00:18:44.307 --rc geninfo_unexecuted_blocks=1 00:18:44.307 00:18:44.307 ' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.307 --rc genhtml_branch_coverage=1 00:18:44.307 --rc genhtml_function_coverage=1 00:18:44.307 --rc genhtml_legend=1 00:18:44.307 --rc geninfo_all_blocks=1 00:18:44.307 --rc geninfo_unexecuted_blocks=1 00:18:44.307 00:18:44.307 ' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:44.307 10:12:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:44.307 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.308 10:12:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:44.308 Cannot find device "nvmf_init_br" 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:44.308 Cannot find device "nvmf_init_br2" 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:44.308 Cannot find device "nvmf_tgt_br" 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.308 Cannot find device "nvmf_tgt_br2" 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:44.308 Cannot find device "nvmf_init_br" 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:44.308 Cannot find device "nvmf_init_br2" 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:44.308 Cannot find device "nvmf_tgt_br" 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:44.308 Cannot find device "nvmf_tgt_br2" 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:18:44.308 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:44.567 Cannot find device "nvmf_br" 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:44.567 Cannot find device "nvmf_init_if" 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:44.567 Cannot find device "nvmf_init_if2" 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.567 10:12:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:44.567 10:12:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:44.567 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:44.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:44.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:18:44.826 00:18:44.826 --- 10.0.0.3 ping statistics --- 00:18:44.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.826 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:44.826 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:44.826 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:18:44.826 00:18:44.826 --- 10.0.0.4 ping statistics --- 00:18:44.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.826 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:44.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:44.826 00:18:44.826 --- 10.0.0.1 ping statistics --- 00:18:44.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.826 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:44.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:44.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:18:44.826 00:18:44.826 --- 10.0.0.2 ping statistics --- 00:18:44.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.826 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:44.826 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:45.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:45.084 Waiting for block devices as requested 00:18:45.084 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:45.343 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:45.343 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:45.343 No valid GPT data, bailing 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:45.601 10:12:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:45.601 No valid GPT data, bailing 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:45.601 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:45.602 No valid GPT data, bailing 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:45.602 No valid GPT data, bailing 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:45.602 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:45.860 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a -a 10.0.0.1 -t tcp -s 4420 00:18:45.860 00:18:45.860 Discovery Log Number of Records 2, Generation counter 2 00:18:45.860 =====Discovery Log Entry 0====== 00:18:45.860 trtype: tcp 00:18:45.860 adrfam: ipv4 00:18:45.860 subtype: current discovery subsystem 00:18:45.860 treq: not specified, sq flow control disable supported 00:18:45.860 portid: 1 00:18:45.860 trsvcid: 4420 00:18:45.860 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:45.860 traddr: 10.0.0.1 00:18:45.860 eflags: none 00:18:45.860 sectype: none 00:18:45.860 =====Discovery Log Entry 1====== 00:18:45.860 trtype: tcp 00:18:45.860 adrfam: ipv4 00:18:45.860 subtype: nvme subsystem 00:18:45.860 treq: not 
specified, sq flow control disable supported 00:18:45.860 portid: 1 00:18:45.860 trsvcid: 4420 00:18:45.860 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:45.860 traddr: 10.0.0.1 00:18:45.860 eflags: none 00:18:45.860 sectype: none 00:18:45.860 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:45.860 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:45.860 ===================================================== 00:18:45.860 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:45.860 ===================================================== 00:18:45.860 Controller Capabilities/Features 00:18:45.860 ================================ 00:18:45.860 Vendor ID: 0000 00:18:45.860 Subsystem Vendor ID: 0000 00:18:45.860 Serial Number: 4ee1137b76db3275a9d7 00:18:45.860 Model Number: Linux 00:18:45.860 Firmware Version: 6.8.9-20 00:18:45.860 Recommended Arb Burst: 0 00:18:45.860 IEEE OUI Identifier: 00 00 00 00:18:45.860 Multi-path I/O 00:18:45.860 May have multiple subsystem ports: No 00:18:45.860 May have multiple controllers: No 00:18:45.860 Associated with SR-IOV VF: No 00:18:45.860 Max Data Transfer Size: Unlimited 00:18:45.860 Max Number of Namespaces: 0 00:18:45.860 Max Number of I/O Queues: 1024 00:18:45.860 NVMe Specification Version (VS): 1.3 00:18:45.860 NVMe Specification Version (Identify): 1.3 00:18:45.860 Maximum Queue Entries: 1024 00:18:45.860 Contiguous Queues Required: No 00:18:45.860 Arbitration Mechanisms Supported 00:18:45.860 Weighted Round Robin: Not Supported 00:18:45.860 Vendor Specific: Not Supported 00:18:45.860 Reset Timeout: 7500 ms 00:18:45.860 Doorbell Stride: 4 bytes 00:18:45.860 NVM Subsystem Reset: Not Supported 00:18:45.861 Command Sets Supported 00:18:45.861 NVM Command Set: Supported 00:18:45.861 Boot Partition: Not Supported 00:18:45.861 Memory Page Size Minimum: 4096 bytes 00:18:45.861 Memory Page Size Maximum: 4096 bytes 00:18:45.861 Persistent Memory Region: Not Supported 00:18:45.861 Optional Asynchronous Events Supported 00:18:45.861 Namespace Attribute Notices: Not Supported 00:18:45.861 Firmware Activation Notices: Not Supported 00:18:45.861 ANA Change Notices: Not Supported 00:18:45.861 PLE Aggregate Log Change Notices: Not Supported 00:18:45.861 LBA Status Info Alert Notices: Not Supported 00:18:45.861 EGE Aggregate Log Change Notices: Not Supported 00:18:45.861 Normal NVM Subsystem Shutdown event: Not Supported 00:18:45.861 Zone Descriptor Change Notices: Not Supported 00:18:45.861 Discovery Log Change Notices: Supported 00:18:45.861 Controller Attributes 00:18:45.861 128-bit Host Identifier: Not Supported 00:18:45.861 Non-Operational Permissive Mode: Not Supported 00:18:45.861 NVM Sets: Not Supported 00:18:45.861 Read Recovery Levels: Not Supported 00:18:45.861 Endurance Groups: Not Supported 00:18:45.861 Predictable Latency Mode: Not Supported 00:18:45.861 Traffic Based Keep ALive: Not Supported 00:18:45.861 Namespace Granularity: Not Supported 00:18:45.861 SQ Associations: Not Supported 00:18:45.861 UUID List: Not Supported 00:18:45.861 Multi-Domain Subsystem: Not Supported 00:18:45.861 Fixed Capacity Management: Not Supported 00:18:45.861 Variable Capacity Management: Not Supported 00:18:45.861 Delete Endurance Group: Not Supported 00:18:45.861 Delete NVM Set: Not Supported 00:18:45.861 Extended LBA Formats Supported: Not Supported 00:18:45.861 Flexible Data 
Placement Supported: Not Supported 00:18:45.861 00:18:45.861 Controller Memory Buffer Support 00:18:45.861 ================================ 00:18:45.861 Supported: No 00:18:45.861 00:18:45.861 Persistent Memory Region Support 00:18:45.861 ================================ 00:18:45.861 Supported: No 00:18:45.861 00:18:45.861 Admin Command Set Attributes 00:18:45.861 ============================ 00:18:45.861 Security Send/Receive: Not Supported 00:18:45.861 Format NVM: Not Supported 00:18:45.861 Firmware Activate/Download: Not Supported 00:18:45.861 Namespace Management: Not Supported 00:18:45.861 Device Self-Test: Not Supported 00:18:45.861 Directives: Not Supported 00:18:45.861 NVMe-MI: Not Supported 00:18:45.861 Virtualization Management: Not Supported 00:18:45.861 Doorbell Buffer Config: Not Supported 00:18:45.861 Get LBA Status Capability: Not Supported 00:18:45.861 Command & Feature Lockdown Capability: Not Supported 00:18:45.861 Abort Command Limit: 1 00:18:45.861 Async Event Request Limit: 1 00:18:45.861 Number of Firmware Slots: N/A 00:18:45.861 Firmware Slot 1 Read-Only: N/A 00:18:45.861 Firmware Activation Without Reset: N/A 00:18:45.861 Multiple Update Detection Support: N/A 00:18:45.861 Firmware Update Granularity: No Information Provided 00:18:45.861 Per-Namespace SMART Log: No 00:18:45.861 Asymmetric Namespace Access Log Page: Not Supported 00:18:45.861 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:45.861 Command Effects Log Page: Not Supported 00:18:45.861 Get Log Page Extended Data: Supported 00:18:45.861 Telemetry Log Pages: Not Supported 00:18:45.861 Persistent Event Log Pages: Not Supported 00:18:45.861 Supported Log Pages Log Page: May Support 00:18:45.861 Commands Supported & Effects Log Page: Not Supported 00:18:45.861 Feature Identifiers & Effects Log Page:May Support 00:18:45.861 NVMe-MI Commands & Effects Log Page: May Support 00:18:45.861 Data Area 4 for Telemetry Log: Not Supported 00:18:45.861 Error Log Page Entries Supported: 1 00:18:45.861 Keep Alive: Not Supported 00:18:45.861 00:18:45.861 NVM Command Set Attributes 00:18:45.861 ========================== 00:18:45.861 Submission Queue Entry Size 00:18:45.861 Max: 1 00:18:45.861 Min: 1 00:18:45.861 Completion Queue Entry Size 00:18:45.861 Max: 1 00:18:45.861 Min: 1 00:18:45.861 Number of Namespaces: 0 00:18:45.861 Compare Command: Not Supported 00:18:45.861 Write Uncorrectable Command: Not Supported 00:18:45.861 Dataset Management Command: Not Supported 00:18:45.861 Write Zeroes Command: Not Supported 00:18:45.861 Set Features Save Field: Not Supported 00:18:45.861 Reservations: Not Supported 00:18:45.861 Timestamp: Not Supported 00:18:45.861 Copy: Not Supported 00:18:45.861 Volatile Write Cache: Not Present 00:18:45.861 Atomic Write Unit (Normal): 1 00:18:45.861 Atomic Write Unit (PFail): 1 00:18:45.861 Atomic Compare & Write Unit: 1 00:18:45.861 Fused Compare & Write: Not Supported 00:18:45.861 Scatter-Gather List 00:18:45.861 SGL Command Set: Supported 00:18:45.861 SGL Keyed: Not Supported 00:18:45.861 SGL Bit Bucket Descriptor: Not Supported 00:18:45.861 SGL Metadata Pointer: Not Supported 00:18:45.861 Oversized SGL: Not Supported 00:18:45.861 SGL Metadata Address: Not Supported 00:18:45.861 SGL Offset: Supported 00:18:45.861 Transport SGL Data Block: Not Supported 00:18:45.861 Replay Protected Memory Block: Not Supported 00:18:45.861 00:18:45.861 Firmware Slot Information 00:18:45.861 ========================= 00:18:45.861 Active slot: 0 00:18:45.861 00:18:45.861 00:18:45.861 Error Log 
00:18:45.861 ========= 00:18:45.861 00:18:45.861 Active Namespaces 00:18:45.861 ================= 00:18:45.861 Discovery Log Page 00:18:45.861 ================== 00:18:45.861 Generation Counter: 2 00:18:45.861 Number of Records: 2 00:18:45.861 Record Format: 0 00:18:45.861 00:18:45.861 Discovery Log Entry 0 00:18:45.861 ---------------------- 00:18:45.861 Transport Type: 3 (TCP) 00:18:45.861 Address Family: 1 (IPv4) 00:18:45.861 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:45.861 Entry Flags: 00:18:45.861 Duplicate Returned Information: 0 00:18:45.861 Explicit Persistent Connection Support for Discovery: 0 00:18:45.861 Transport Requirements: 00:18:45.861 Secure Channel: Not Specified 00:18:45.861 Port ID: 1 (0x0001) 00:18:45.861 Controller ID: 65535 (0xffff) 00:18:45.861 Admin Max SQ Size: 32 00:18:45.861 Transport Service Identifier: 4420 00:18:45.861 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:45.861 Transport Address: 10.0.0.1 00:18:45.861 Discovery Log Entry 1 00:18:45.861 ---------------------- 00:18:45.861 Transport Type: 3 (TCP) 00:18:45.861 Address Family: 1 (IPv4) 00:18:45.861 Subsystem Type: 2 (NVM Subsystem) 00:18:45.861 Entry Flags: 00:18:45.861 Duplicate Returned Information: 0 00:18:45.861 Explicit Persistent Connection Support for Discovery: 0 00:18:45.861 Transport Requirements: 00:18:45.861 Secure Channel: Not Specified 00:18:45.861 Port ID: 1 (0x0001) 00:18:45.861 Controller ID: 65535 (0xffff) 00:18:45.861 Admin Max SQ Size: 32 00:18:45.861 Transport Service Identifier: 4420 00:18:45.861 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:45.861 Transport Address: 10.0.0.1 00:18:45.861 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:46.121 get_feature(0x01) failed 00:18:46.121 get_feature(0x02) failed 00:18:46.121 get_feature(0x04) failed 00:18:46.121 ===================================================== 00:18:46.121 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:46.121 ===================================================== 00:18:46.121 Controller Capabilities/Features 00:18:46.121 ================================ 00:18:46.121 Vendor ID: 0000 00:18:46.121 Subsystem Vendor ID: 0000 00:18:46.121 Serial Number: 44d10ae7f73d50461329 00:18:46.121 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:46.121 Firmware Version: 6.8.9-20 00:18:46.121 Recommended Arb Burst: 6 00:18:46.121 IEEE OUI Identifier: 00 00 00 00:18:46.121 Multi-path I/O 00:18:46.121 May have multiple subsystem ports: Yes 00:18:46.121 May have multiple controllers: Yes 00:18:46.121 Associated with SR-IOV VF: No 00:18:46.121 Max Data Transfer Size: Unlimited 00:18:46.121 Max Number of Namespaces: 1024 00:18:46.121 Max Number of I/O Queues: 128 00:18:46.121 NVMe Specification Version (VS): 1.3 00:18:46.121 NVMe Specification Version (Identify): 1.3 00:18:46.121 Maximum Queue Entries: 1024 00:18:46.121 Contiguous Queues Required: No 00:18:46.121 Arbitration Mechanisms Supported 00:18:46.121 Weighted Round Robin: Not Supported 00:18:46.121 Vendor Specific: Not Supported 00:18:46.121 Reset Timeout: 7500 ms 00:18:46.121 Doorbell Stride: 4 bytes 00:18:46.121 NVM Subsystem Reset: Not Supported 00:18:46.121 Command Sets Supported 00:18:46.121 NVM Command Set: Supported 00:18:46.121 Boot Partition: Not Supported 00:18:46.121 Memory 
Page Size Minimum: 4096 bytes 00:18:46.121 Memory Page Size Maximum: 4096 bytes 00:18:46.121 Persistent Memory Region: Not Supported 00:18:46.121 Optional Asynchronous Events Supported 00:18:46.121 Namespace Attribute Notices: Supported 00:18:46.121 Firmware Activation Notices: Not Supported 00:18:46.121 ANA Change Notices: Supported 00:18:46.121 PLE Aggregate Log Change Notices: Not Supported 00:18:46.121 LBA Status Info Alert Notices: Not Supported 00:18:46.121 EGE Aggregate Log Change Notices: Not Supported 00:18:46.121 Normal NVM Subsystem Shutdown event: Not Supported 00:18:46.121 Zone Descriptor Change Notices: Not Supported 00:18:46.121 Discovery Log Change Notices: Not Supported 00:18:46.121 Controller Attributes 00:18:46.121 128-bit Host Identifier: Supported 00:18:46.121 Non-Operational Permissive Mode: Not Supported 00:18:46.121 NVM Sets: Not Supported 00:18:46.121 Read Recovery Levels: Not Supported 00:18:46.121 Endurance Groups: Not Supported 00:18:46.121 Predictable Latency Mode: Not Supported 00:18:46.121 Traffic Based Keep ALive: Supported 00:18:46.121 Namespace Granularity: Not Supported 00:18:46.121 SQ Associations: Not Supported 00:18:46.121 UUID List: Not Supported 00:18:46.121 Multi-Domain Subsystem: Not Supported 00:18:46.121 Fixed Capacity Management: Not Supported 00:18:46.121 Variable Capacity Management: Not Supported 00:18:46.121 Delete Endurance Group: Not Supported 00:18:46.121 Delete NVM Set: Not Supported 00:18:46.121 Extended LBA Formats Supported: Not Supported 00:18:46.121 Flexible Data Placement Supported: Not Supported 00:18:46.121 00:18:46.121 Controller Memory Buffer Support 00:18:46.121 ================================ 00:18:46.121 Supported: No 00:18:46.121 00:18:46.121 Persistent Memory Region Support 00:18:46.121 ================================ 00:18:46.121 Supported: No 00:18:46.121 00:18:46.121 Admin Command Set Attributes 00:18:46.121 ============================ 00:18:46.121 Security Send/Receive: Not Supported 00:18:46.121 Format NVM: Not Supported 00:18:46.121 Firmware Activate/Download: Not Supported 00:18:46.121 Namespace Management: Not Supported 00:18:46.121 Device Self-Test: Not Supported 00:18:46.121 Directives: Not Supported 00:18:46.121 NVMe-MI: Not Supported 00:18:46.121 Virtualization Management: Not Supported 00:18:46.121 Doorbell Buffer Config: Not Supported 00:18:46.121 Get LBA Status Capability: Not Supported 00:18:46.121 Command & Feature Lockdown Capability: Not Supported 00:18:46.121 Abort Command Limit: 4 00:18:46.121 Async Event Request Limit: 4 00:18:46.121 Number of Firmware Slots: N/A 00:18:46.121 Firmware Slot 1 Read-Only: N/A 00:18:46.121 Firmware Activation Without Reset: N/A 00:18:46.121 Multiple Update Detection Support: N/A 00:18:46.121 Firmware Update Granularity: No Information Provided 00:18:46.121 Per-Namespace SMART Log: Yes 00:18:46.121 Asymmetric Namespace Access Log Page: Supported 00:18:46.121 ANA Transition Time : 10 sec 00:18:46.121 00:18:46.121 Asymmetric Namespace Access Capabilities 00:18:46.121 ANA Optimized State : Supported 00:18:46.121 ANA Non-Optimized State : Supported 00:18:46.121 ANA Inaccessible State : Supported 00:18:46.121 ANA Persistent Loss State : Supported 00:18:46.121 ANA Change State : Supported 00:18:46.121 ANAGRPID is not changed : No 00:18:46.121 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:46.121 00:18:46.121 ANA Group Identifier Maximum : 128 00:18:46.121 Number of ANA Group Identifiers : 128 00:18:46.121 Max Number of Allowed Namespaces : 1024 00:18:46.121 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:46.121 Command Effects Log Page: Supported 00:18:46.121 Get Log Page Extended Data: Supported 00:18:46.121 Telemetry Log Pages: Not Supported 00:18:46.121 Persistent Event Log Pages: Not Supported 00:18:46.121 Supported Log Pages Log Page: May Support 00:18:46.121 Commands Supported & Effects Log Page: Not Supported 00:18:46.121 Feature Identifiers & Effects Log Page:May Support 00:18:46.121 NVMe-MI Commands & Effects Log Page: May Support 00:18:46.122 Data Area 4 for Telemetry Log: Not Supported 00:18:46.122 Error Log Page Entries Supported: 128 00:18:46.122 Keep Alive: Supported 00:18:46.122 Keep Alive Granularity: 1000 ms 00:18:46.122 00:18:46.122 NVM Command Set Attributes 00:18:46.122 ========================== 00:18:46.122 Submission Queue Entry Size 00:18:46.122 Max: 64 00:18:46.122 Min: 64 00:18:46.122 Completion Queue Entry Size 00:18:46.122 Max: 16 00:18:46.122 Min: 16 00:18:46.122 Number of Namespaces: 1024 00:18:46.122 Compare Command: Not Supported 00:18:46.122 Write Uncorrectable Command: Not Supported 00:18:46.122 Dataset Management Command: Supported 00:18:46.122 Write Zeroes Command: Supported 00:18:46.122 Set Features Save Field: Not Supported 00:18:46.122 Reservations: Not Supported 00:18:46.122 Timestamp: Not Supported 00:18:46.122 Copy: Not Supported 00:18:46.122 Volatile Write Cache: Present 00:18:46.122 Atomic Write Unit (Normal): 1 00:18:46.122 Atomic Write Unit (PFail): 1 00:18:46.122 Atomic Compare & Write Unit: 1 00:18:46.122 Fused Compare & Write: Not Supported 00:18:46.122 Scatter-Gather List 00:18:46.122 SGL Command Set: Supported 00:18:46.122 SGL Keyed: Not Supported 00:18:46.122 SGL Bit Bucket Descriptor: Not Supported 00:18:46.122 SGL Metadata Pointer: Not Supported 00:18:46.122 Oversized SGL: Not Supported 00:18:46.122 SGL Metadata Address: Not Supported 00:18:46.122 SGL Offset: Supported 00:18:46.122 Transport SGL Data Block: Not Supported 00:18:46.122 Replay Protected Memory Block: Not Supported 00:18:46.122 00:18:46.122 Firmware Slot Information 00:18:46.122 ========================= 00:18:46.122 Active slot: 0 00:18:46.122 00:18:46.122 Asymmetric Namespace Access 00:18:46.122 =========================== 00:18:46.122 Change Count : 0 00:18:46.122 Number of ANA Group Descriptors : 1 00:18:46.122 ANA Group Descriptor : 0 00:18:46.122 ANA Group ID : 1 00:18:46.122 Number of NSID Values : 1 00:18:46.122 Change Count : 0 00:18:46.122 ANA State : 1 00:18:46.122 Namespace Identifier : 1 00:18:46.122 00:18:46.122 Commands Supported and Effects 00:18:46.122 ============================== 00:18:46.122 Admin Commands 00:18:46.122 -------------- 00:18:46.122 Get Log Page (02h): Supported 00:18:46.122 Identify (06h): Supported 00:18:46.122 Abort (08h): Supported 00:18:46.122 Set Features (09h): Supported 00:18:46.122 Get Features (0Ah): Supported 00:18:46.122 Asynchronous Event Request (0Ch): Supported 00:18:46.122 Keep Alive (18h): Supported 00:18:46.122 I/O Commands 00:18:46.122 ------------ 00:18:46.122 Flush (00h): Supported 00:18:46.122 Write (01h): Supported LBA-Change 00:18:46.122 Read (02h): Supported 00:18:46.122 Write Zeroes (08h): Supported LBA-Change 00:18:46.122 Dataset Management (09h): Supported 00:18:46.122 00:18:46.122 Error Log 00:18:46.122 ========= 00:18:46.122 Entry: 0 00:18:46.122 Error Count: 0x3 00:18:46.122 Submission Queue Id: 0x0 00:18:46.122 Command Id: 0x5 00:18:46.122 Phase Bit: 0 00:18:46.122 Status Code: 0x2 00:18:46.122 Status Code Type: 0x0 00:18:46.122 Do Not Retry: 1 00:18:46.122 Error 
Location: 0x28 00:18:46.122 LBA: 0x0 00:18:46.122 Namespace: 0x0 00:18:46.122 Vendor Log Page: 0x0 00:18:46.122 ----------- 00:18:46.122 Entry: 1 00:18:46.122 Error Count: 0x2 00:18:46.122 Submission Queue Id: 0x0 00:18:46.122 Command Id: 0x5 00:18:46.122 Phase Bit: 0 00:18:46.122 Status Code: 0x2 00:18:46.122 Status Code Type: 0x0 00:18:46.122 Do Not Retry: 1 00:18:46.122 Error Location: 0x28 00:18:46.122 LBA: 0x0 00:18:46.122 Namespace: 0x0 00:18:46.122 Vendor Log Page: 0x0 00:18:46.122 ----------- 00:18:46.122 Entry: 2 00:18:46.122 Error Count: 0x1 00:18:46.122 Submission Queue Id: 0x0 00:18:46.122 Command Id: 0x4 00:18:46.122 Phase Bit: 0 00:18:46.122 Status Code: 0x2 00:18:46.122 Status Code Type: 0x0 00:18:46.122 Do Not Retry: 1 00:18:46.122 Error Location: 0x28 00:18:46.122 LBA: 0x0 00:18:46.122 Namespace: 0x0 00:18:46.122 Vendor Log Page: 0x0 00:18:46.122 00:18:46.122 Number of Queues 00:18:46.122 ================ 00:18:46.122 Number of I/O Submission Queues: 128 00:18:46.122 Number of I/O Completion Queues: 128 00:18:46.122 00:18:46.122 ZNS Specific Controller Data 00:18:46.122 ============================ 00:18:46.122 Zone Append Size Limit: 0 00:18:46.122 00:18:46.122 00:18:46.122 Active Namespaces 00:18:46.122 ================= 00:18:46.122 get_feature(0x05) failed 00:18:46.122 Namespace ID:1 00:18:46.122 Command Set Identifier: NVM (00h) 00:18:46.122 Deallocate: Supported 00:18:46.122 Deallocated/Unwritten Error: Not Supported 00:18:46.122 Deallocated Read Value: Unknown 00:18:46.122 Deallocate in Write Zeroes: Not Supported 00:18:46.122 Deallocated Guard Field: 0xFFFF 00:18:46.122 Flush: Supported 00:18:46.122 Reservation: Not Supported 00:18:46.122 Namespace Sharing Capabilities: Multiple Controllers 00:18:46.122 Size (in LBAs): 1310720 (5GiB) 00:18:46.122 Capacity (in LBAs): 1310720 (5GiB) 00:18:46.122 Utilization (in LBAs): 1310720 (5GiB) 00:18:46.122 UUID: 75e8b917-9db8-445a-aba0-365ef7807f13 00:18:46.122 Thin Provisioning: Not Supported 00:18:46.122 Per-NS Atomic Units: Yes 00:18:46.122 Atomic Boundary Size (Normal): 0 00:18:46.122 Atomic Boundary Size (PFail): 0 00:18:46.122 Atomic Boundary Offset: 0 00:18:46.122 NGUID/EUI64 Never Reused: No 00:18:46.122 ANA group ID: 1 00:18:46.122 Namespace Write Protected: No 00:18:46.122 Number of LBA Formats: 1 00:18:46.122 Current LBA Format: LBA Format #00 00:18:46.122 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:46.122 00:18:46.122 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:46.122 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:46.122 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:46.122 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.122 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:46.122 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.122 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.122 rmmod nvme_tcp 00:18:46.122 rmmod nvme_fabrics 00:18:46.122 10:12:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.122 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:46.122 10:13:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:46.122 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:46.122 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:46.122 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:46.122 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:46.122 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:18:46.122 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.381 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:46.382 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.382 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.382 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:46.639 10:13:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:47.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:47.464 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:47.464 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:47.464 00:18:47.464 real 0m3.372s 00:18:47.464 user 0m1.180s 00:18:47.464 sys 0m1.508s 00:18:47.464 10:13:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.464 10:13:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.464 ************************************ 00:18:47.464 END TEST nvmf_identify_kernel_target 00:18:47.464 ************************************ 00:18:47.464 10:13:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:47.464 10:13:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:47.464 10:13:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.464 10:13:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.464 ************************************ 00:18:47.464 START TEST nvmf_auth_host 00:18:47.464 ************************************ 00:18:47.464 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:47.724 * Looking for test storage... 
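The identify_kernel_target run above drives the Linux kernel NVMe-oF target entirely through configfs, but xtrace does not show the redirection target of each echo, so the attribute names written by configure_kernel_target are not visible in the trace. A minimal standalone sketch of the same setup, assuming the standard nvmet configfs attribute names (attr_allow_any_host, device_path, enable, addr_traddr, addr_trtype, addr_trsvcid, addr_adrfam) and using the /dev/nvme1n1 backing device the script selected above:

#!/usr/bin/env bash
# Sketch only: mirrors the configure_kernel_target sequence traced above (nvmf/common.sh@686-705).
# The configfs attribute names are assumed from the kernel nvmet interface; the log hides them.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet

modprobe nvmet nvmet_tcp                                               # kernel target core + TCP transport
mkdir "$nvmet/subsystems/$nqn"                                         # create the subsystem
mkdir "$nvmet/subsystems/$nqn/namespaces/1"                            # namespace 1 under it
mkdir "$nvmet/ports/1"                                                 # listener port 1
echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"       # accept any host NQN (assumed target of 'echo 1')
echo /dev/nvme1n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"  # backing block device from the scan above
echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"       # activate the namespace
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"                       # target IP inside the veth topology set up earlier
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"            # expose the subsystem on the port

# Teardown follows the clean_kernel_target trap seen near the end of the test above:
#   rm -f "$nvmet/ports/1/subsystems/$nqn"
#   rmdir "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1" "$nvmet/subsystems/$nqn"
#   modprobe -r nvmet_tcp nvmet

With the port linked, the 'nvme discover -t tcp -a 10.0.0.1 -s 4420' call traced above returns the two discovery log entries (the discovery subsystem and nqn.2016-06.io.spdk:testnqn) that the identify pass then inspects.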
00:18:47.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:47.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.724 --rc genhtml_branch_coverage=1 00:18:47.724 --rc genhtml_function_coverage=1 00:18:47.724 --rc genhtml_legend=1 00:18:47.724 --rc geninfo_all_blocks=1 00:18:47.724 --rc geninfo_unexecuted_blocks=1 00:18:47.724 00:18:47.724 ' 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:47.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.724 --rc genhtml_branch_coverage=1 00:18:47.724 --rc genhtml_function_coverage=1 00:18:47.724 --rc genhtml_legend=1 00:18:47.724 --rc geninfo_all_blocks=1 00:18:47.724 --rc geninfo_unexecuted_blocks=1 00:18:47.724 00:18:47.724 ' 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:47.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.724 --rc genhtml_branch_coverage=1 00:18:47.724 --rc genhtml_function_coverage=1 00:18:47.724 --rc genhtml_legend=1 00:18:47.724 --rc geninfo_all_blocks=1 00:18:47.724 --rc geninfo_unexecuted_blocks=1 00:18:47.724 00:18:47.724 ' 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:47.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.724 --rc genhtml_branch_coverage=1 00:18:47.724 --rc genhtml_function_coverage=1 00:18:47.724 --rc genhtml_legend=1 00:18:47.724 --rc geninfo_all_blocks=1 00:18:47.724 --rc geninfo_unexecuted_blocks=1 00:18:47.724 00:18:47.724 ' 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.724 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:47.725 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:47.725 Cannot find device "nvmf_init_br" 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:47.725 Cannot find device "nvmf_init_br2" 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:47.725 Cannot find device "nvmf_tgt_br" 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.725 Cannot find device "nvmf_tgt_br2" 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:47.725 Cannot find device "nvmf_init_br" 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:47.725 Cannot find device "nvmf_init_br2" 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:47.725 Cannot find device "nvmf_tgt_br" 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:47.725 Cannot find device "nvmf_tgt_br2" 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:18:47.725 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:47.997 Cannot find device "nvmf_br" 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:47.997 Cannot find device "nvmf_init_if" 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:47.997 Cannot find device "nvmf_init_if2" 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:47.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.997 10:13:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:47.997 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:48.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:48.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:48.263 00:18:48.263 --- 10.0.0.3 ping statistics --- 00:18:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.263 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:48.263 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:48.263 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:18:48.263 00:18:48.263 --- 10.0.0.4 ping statistics --- 00:18:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.263 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:48.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:48.263 00:18:48.263 --- 10.0.0.1 ping statistics --- 00:18:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.263 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:48.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:48.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:48.263 00:18:48.263 --- 10.0.0.2 ping statistics --- 00:18:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.263 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:48.263 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78370 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78370 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78370 ']' 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
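Note: the setup traced above (nvmf_veth_init through the ping checks) builds the isolated test network the TCP transport tests run against: a dedicated network namespace for the target, two veth pairs per side, 10.0.0.1-10.0.0.4/24 addressing, a bridge joining the host-side peers, ACCEPT rules for port 4420, and reachability pings in both directions. A condensed sketch of the same sequence is below; interface names and addresses are copied from the log, while error handling and the iptables comment tags are omitted.

#!/usr/bin/env bash
# Condensed sketch of the veth/namespace setup traced above (nvmf_veth_init).
set -e
ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator-side and target-side, each with a bridge-facing peer
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiators on .1/.2, target interfaces on .3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# allow NVMe/TCP (port 4420) in, allow bridge forwarding, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                   # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host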
00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.264 10:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.522 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.522 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:48.522 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.522 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.522 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.780 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.780 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:48.780 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:48.780 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8ad921e837807b46231dd39bee1defb3 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.9My 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8ad921e837807b46231dd39bee1defb3 0 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8ad921e837807b46231dd39bee1defb3 0 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8ad921e837807b46231dd39bee1defb3 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.9My 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.9My 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.9My 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:48.781 10:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3d6b99df8769c281156177f2b6f09350b325796f9a8f754c509b0fc9917853cb 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gri 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3d6b99df8769c281156177f2b6f09350b325796f9a8f754c509b0fc9917853cb 3 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3d6b99df8769c281156177f2b6f09350b325796f9a8f754c509b0fc9917853cb 3 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3d6b99df8769c281156177f2b6f09350b325796f9a8f754c509b0fc9917853cb 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gri 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gri 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gri 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2f2481ec7b9282d32ec64337a53a09f4b244404607bb2552 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WS5 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2f2481ec7b9282d32ec64337a53a09f4b244404607bb2552 0 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2f2481ec7b9282d32ec64337a53a09f4b244404607bb2552 0 
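Note: the gen_dhchap_key entries above and below this point all follow the same pattern: read random bytes from /dev/urandom with xxd, wrap the hex string as a DHHC-1 secret, and store it mode 0600 under /tmp. A rough standalone sketch is below; the CRC32-plus-base64 wrapping is inferred from the DHHC-1:00:...: keys that appear later in this log, not copied from the SPDK helper, so the details should be treated as an approximation.

#!/usr/bin/env bash
# Approximate sketch of one gen_dhchap_key invocation traced in this block.
digest=0          # 0=null, 1=sha256, 2=sha384, 3=sha512 (per the digests map above)
len=32            # key length in hex characters

key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 32 hex chars for len=32
file=$(mktemp -t spdk.key-null.XXX)

# Wrap the ASCII hex key as a DHHC-1 secret: base64 of (key || CRC32(key), little endian),
# prefixed with the digest id. Inferred from the key values visible later in the log.
python3 - "$key" "$digest" <<'EOF' > "$file"
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

chmod 0600 "$file"
echo "$file"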
00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2f2481ec7b9282d32ec64337a53a09f4b244404607bb2552 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WS5 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WS5 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.WS5 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4265801a5f754352a3a6709a82f34c2b5e4f8b4da78ced02 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YIr 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4265801a5f754352a3a6709a82f34c2b5e4f8b4da78ced02 2 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4265801a5f754352a3a6709a82f34c2b5e4f8b4da78ced02 2 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4265801a5f754352a3a6709a82f34c2b5e4f8b4da78ced02 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:48.781 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YIr 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YIr 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.YIr 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.040 10:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0daca94ee444cf1d57ea9f071bf6d1b8 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8gF 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0daca94ee444cf1d57ea9f071bf6d1b8 1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0daca94ee444cf1d57ea9f071bf6d1b8 1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0daca94ee444cf1d57ea9f071bf6d1b8 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8gF 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8gF 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8gF 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=caaf25d1d9059f125c04f676fa7c8c8f 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Oyt 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key caaf25d1d9059f125c04f676fa7c8c8f 1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 caaf25d1d9059f125c04f676fa7c8c8f 1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=caaf25d1d9059f125c04f676fa7c8c8f 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Oyt 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Oyt 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Oyt 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f0c2c7fc954fe4bdac1fcbb8772483bfa498a7cc9c2f1a69 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FyH 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f0c2c7fc954fe4bdac1fcbb8772483bfa498a7cc9c2f1a69 2 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f0c2c7fc954fe4bdac1fcbb8772483bfa498a7cc9c2f1a69 2 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f0c2c7fc954fe4bdac1fcbb8772483bfa498a7cc9c2f1a69 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FyH 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FyH 00:18:49.040 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.FyH 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:49.041 10:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e8e5b7eb1347f2957dc8123666118d5a 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aVQ 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e8e5b7eb1347f2957dc8123666118d5a 0 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e8e5b7eb1347f2957dc8123666118d5a 0 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e8e5b7eb1347f2957dc8123666118d5a 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:49.041 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aVQ 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aVQ 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.aVQ 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=23ae8c99cb4e54307aa286d74d720db3a2ca5d7604ce63fa721278a43beb0a06 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mwG 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 23ae8c99cb4e54307aa286d74d720db3a2ca5d7604ce63fa721278a43beb0a06 3 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 23ae8c99cb4e54307aa286d74d720db3a2ca5d7604ce63fa721278a43beb0a06 3 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=23ae8c99cb4e54307aa286d74d720db3a2ca5d7604ce63fa721278a43beb0a06 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:18:49.299 10:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mwG 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mwG 00:18:49.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mwG 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78370 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78370 ']' 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.299 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9My 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gri ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gri 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.WS5 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.YIr ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.YIr 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8gF 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Oyt ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Oyt 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FyH 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.aVQ ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.aVQ 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mwG 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.558 10:13:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:49.558 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:49.817 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:49.817 10:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:50.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:50.076 Waiting for block devices as requested 00:18:50.076 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:50.334 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:50.900 No valid GPT data, bailing 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:50.900 No valid GPT data, bailing 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:50.900 No valid GPT data, bailing 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:50.900 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:51.159 No valid GPT data, bailing 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a -a 10.0.0.1 -t tcp -s 4420 00:18:51.159 00:18:51.159 Discovery Log Number of Records 2, Generation counter 2 00:18:51.159 =====Discovery Log Entry 0====== 00:18:51.159 trtype: tcp 00:18:51.159 adrfam: ipv4 00:18:51.159 subtype: current discovery subsystem 00:18:51.159 treq: not specified, sq flow control disable supported 00:18:51.159 portid: 1 00:18:51.159 trsvcid: 4420 00:18:51.159 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:51.159 traddr: 10.0.0.1 00:18:51.159 eflags: none 00:18:51.159 sectype: none 00:18:51.159 =====Discovery Log Entry 1====== 00:18:51.159 trtype: tcp 00:18:51.159 adrfam: ipv4 00:18:51.159 subtype: nvme subsystem 00:18:51.159 treq: not specified, sq flow control disable supported 00:18:51.159 portid: 1 00:18:51.159 trsvcid: 4420 00:18:51.159 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:51.159 traddr: 10.0.0.1 00:18:51.159 eflags: none 00:18:51.159 sectype: none 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:18:51.159 10:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.159 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.418 nvme0n1 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.418 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.676 nvme0n1 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.676 
10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.676 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.677 10:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.677 nvme0n1 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.677 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:18:51.936 10:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.936 nvme0n1 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.936 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.937 10:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.937 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.195 nvme0n1 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:52.195 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.196 
10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.196 10:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:18:52.196 nvme0n1 00:18:52.196 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.196 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.196 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.196 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.196 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.196 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.454 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:52.718 10:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.718 nvme0n1 00:18:52.718 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.976 10:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.976 10:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.976 nvme0n1 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.976 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.235 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.236 10:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.236 nvme0n1 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.236 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.495 nvme0n1 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.495 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.754 nvme0n1 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.754 10:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.321 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.322 10:13:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.322 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.581 nvme0n1 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:54.581 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.852 10:13:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.852 nvme0n1 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:18:54.852 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:18:55.110 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:18:55.110 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.111 nvme0n1 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.111 10:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:55.370 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.371 nvme0n1 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.371 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.630 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.630 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.630 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.630 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.630 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.630 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:55.631 10:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.631 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.889 nvme0n1 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.889 10:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.790 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.791 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.050 nvme0n1 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.050 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.051 10:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.617 nvme0n1 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.617 10:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.617 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.618 10:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.618 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.876 nvme0n1 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:58.877 10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.877 
10:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.445 nvme0n1 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.445 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.704 nvme0n1 00:18:59.704 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.704 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.704 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.704 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.704 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.704 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.962 10:13:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.962 10:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.528 nvme0n1 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:00.528 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:00.529 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.529 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.529 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.464 nvme0n1 00:19:01.464 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.464 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.464 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.464 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.464 10:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.464 
10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.464 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.030 nvme0n1 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:02.030 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.031 10:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.598 nvme0n1 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.598 10:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:02.598 10:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.598 10:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.561 nvme0n1 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:03.561 nvme0n1 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.561 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.846 nvme0n1 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:03.846 
10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:03.846 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.847 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.106 nvme0n1 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.106 
10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.106 nvme0n1 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.106 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.365 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.365 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.365 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:04.365 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:04.365 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:04.365 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.365 10:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.365 nvme0n1 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.365 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.366 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.624 nvme0n1 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.624 
10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:04.624 10:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.624 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.884 nvme0n1 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:04.884 10:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.884 nvme0n1 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.884 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:05.142 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.143 10:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.143 nvme0n1 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:05.143 
10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.143 10:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.143 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
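The iterations above and below all follow the same shape, so here is a minimal sketch of the loop as it appears in this xtrace. It paraphrases only what the log itself shows (nvmet_auth_set_key on the target, then bdev_nvme_set_options, bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key, bdev_nvme_get_controllers and bdev_nvme_detach_controller on the host); rpc_cmd, the keys/ckeys arrays and the 10.0.0.1:4420 listener are assumed to be the ones set up earlier in host/auth.sh, and this is not the verbatim script source.

# Minimal sketch of one digest pass, reconstructed from the trace above.
# Assumes the rpc_cmd helper, the keys/ckeys arrays and the nvmet_auth_set_key
# function defined earlier in host/auth.sh; not the verbatim script.
digest=sha384
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do   # groups exercised in this part of the log
    for keyid in "${!keys[@]}"; do                           # keyids 0..4 in this run
        # Target side: load 'hmac(sha384)', the dhgroup and the DHHC-1 secrets.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: restrict the negotiable digest/dhgroup, then attach with
        # the matching host key (and controller key, when one is defined).
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the authenticated controller came up, then tear it down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done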
00:19:05.402 nvme0n1 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:05.402 10:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.402 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.660 nvme0n1 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.660 10:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.660 10:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:05.660 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:05.661 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:05.661 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.661 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.661 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.919 nvme0n1 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.919 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.177 nvme0n1 00:19:06.177 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.177 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.177 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.177 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.177 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.177 10:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.177 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.434 nvme0n1 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.434 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.693 nvme0n1 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.693 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.952 10:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.952 10:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.209 nvme0n1 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:07.209 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.210 10:13:21 
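Every connect_authenticate pass traced here reduces to the same initiator-side RPC sequence. The sketch below is reconstructed from the rpc_cmd invocations logged above (rpc_cmd and get_main_ns_ip are the test-framework helpers seen in the trace; the 10.0.0.1:4420 endpoint and the host/subsystem NQNs are the values from this run), so it is a readable summary of the loop body rather than the literal auth.sh code.

    # One connect_authenticate pass, reconstructed from the xtrace output.
    digest=sha384 dhgroup=ffdhe6144 keyid=1

    # Restrict the initiator to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # The controller key option is passed only when a ckey was registered for
    # this keyid (keyid 4 in this run has none, so the option is dropped).
    ckey_opt=(--dhchap-ctrlr-key "ckey${keyid}")

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey_opt[@]}"

    # Authentication succeeded if the controller shows up; detach for the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0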
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.210 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.774 nvme0n1 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.774 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.340 nvme0n1 00:19:08.340 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.340 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.340 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.340 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.340 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.340 10:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.340 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.601 nvme0n1 00:19:08.601 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.601 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.601 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.601 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.601 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.601 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:08.859 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.860 10:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.860 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.119 nvme0n1 00:19:09.119 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.119 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.119 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.119 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.119 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.119 10:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:09.377 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.378 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.946 nvme0n1 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:09.946 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:09.947 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:09.947 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.206 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:10.207 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:10.207 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:10.207 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.207 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.207 10:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.775 nvme0n1 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:10.775 10:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.775 10:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.775 10:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.344 nvme0n1 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.344 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:11.603 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.603 
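The get_main_ns_ip helper traced repeatedly above (nvmf/common.sh@769-@783) picks the address to dial based on the transport. The sketch below follows the logged lines, with the variable names NVMF_FIRST_TARGET_IP / NVMF_INITIATOR_IP and the indirect expansion taken from the trace; the error handling is abridged, so this is a reconstruction rather than the exact common.sh source.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
            [tcp]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator IP
        )

        # TEST_TRANSPORT is tcp here, so ip becomes the name "NVMF_INITIATOR_IP" ...
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # ... and indirect expansion yields the actual address, 10.0.0.1 in this run.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }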
10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.171 nvme0n1 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:12.171 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.172 10:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.739 nvme0n1 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:12.739 10:13:26 
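At this point the trace rolls over from the sha384 sweep to sha512 and restarts the dhgroup list at ffdhe2048. The host/auth.sh@100-@104 markers throughout the log imply the following driving loop nest; the arrays themselves are not printed, so the comment only records what this excerpt actually exercises.

    # Loop nest implied by the @100-@104 trace markers. In this excerpt the sweep
    # covers digests sha384 and sha512, dhgroups ffdhe2048/ffdhe6144/ffdhe8192,
    # and key ids 0 through 4.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # connect, verify, detach
            done
        done
    done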
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:12.739 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:12.740 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:12.999 10:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 nvme0n1 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:12.999 10:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.259 nvme0n1 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.259 10:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.259 nvme0n1 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.259 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 nvme0n1 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.519 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.520 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.779 nvme0n1 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.779 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.037 nvme0n1 00:19:14.037 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.037 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.037 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.037 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.037 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.037 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.037 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.038 nvme0n1 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.038 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:14.295 
10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:14.295 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.296 10:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.296 nvme0n1 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.296 
10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:14.296 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.555 nvme0n1 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:14.555 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.556 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.815 nvme0n1 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.815 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.816 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.075 nvme0n1 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.075 
10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:15.075 10:13:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.075 10:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.335 nvme0n1 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:15.335 10:13:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.335 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.594 nvme0n1 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.594 10:13:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.594 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.853 nvme0n1 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.853 
10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.853 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
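
The trace above completes one pass of the sha512/ffdhe4096 loop: for each keyid the target-side helper (nvmet_auth_set_key) echoes the 'hmac(sha512)' digest, the dhgroup name, and the DHHC-1 secrets into the per-host auth settings, the host is restricted to the same digest/dhgroup, and a controller is attached, verified by name, and detached; for keyid 4 the controller key is empty, so the attach is done without --dhchap-ctrlr-key. A minimal sketch of the host-side RPC sequence being exercised is shown below, assuming rpc_cmd is the suite's wrapper around scripts/rpc.py and that keys named key1/ckey1 were registered earlier in the run (not shown here).

  # Restrict the host to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # Attach to the authenticated subsystem, offering key1 and requiring the
  # controller to prove possession of ckey1 (bidirectional authentication).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # The attach only succeeds if authentication passed; confirm, then clean up
  # before the next digest/dhgroup/keyid combination.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The remainder of the log repeats this cycle for ffdhe6144 and ffdhe8192 with the same key set, which is why the entries below differ only in the --dhchap-dhgroups value and the key index passed to nvmet_auth_set_key.
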
00:19:16.111 nvme0n1 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.111 10:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:16.369 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:16.370 10:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.370 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.628 nvme0n1 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.628 10:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.628 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.886 10:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.886 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.145 nvme0n1 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.145 10:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.404 nvme0n1 00:19:17.404 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.662 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.921 nvme0n1 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.921 10:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.506 nvme0n1 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFkOTIxZTgzNzgwN2I0NjIzMWRkMzliZWUxZGVmYjNdK3DB: 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: ]] 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q2Yjk5ZGY4NzY5YzI4MTE1NjE3N2YyYjZmMDkzNTBiMzI1Nzk2ZjlhOGY3NTRjNTA5YjBmYzk5MTc4NTNjYl2bVbY=: 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.506 10:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.506 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.507 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.073 nvme0n1 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.073 10:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.073 10:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.008 nvme0n1 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.008 10:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.572 nvme0n1 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMmM3ZmM5NTRmZTRiZGFjMWZjYmI4NzcyNDgzYmZhNDk4YTdjYzljMmYxYTY5zRHXwA==: 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThlNWI3ZWIxMzQ3ZjI5NTdkYzgxMjM2NjYxMThkNWHnH08N: 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.572 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.138 nvme0n1 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.138 10:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjNhZThjOTljYjRlNTQzMDdhYTI4NmQ3NGQ3MjBkYjNhMmNhNWQ3NjA0Y2U2M2ZhNzIxMjc4YTQzYmViMGEwNhflod4=: 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:21.138 10:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.138 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.422 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.988 nvme0n1 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
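# The rounds above walk connect_authenticate over each key index with sha512/ffdhe8192:
# nvmet_auth_set_key writes the DHHC-1 secret for the host into kernel nvmet configfs, and
# the host side authenticates through bdev_nvme_attach_controller. A condensed sketch of one
# round follows; the configfs attribute names are assumptions (the trace only shows the
# echoed values), while the rpc_cmd invocations mirror the ones logged above.
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$nvmet_host/dhchap_hash"       # assumed attribute name
echo ffdhe8192      > "$nvmet_host/dhchap_dhgroup"    # assumed attribute name
echo "$key"         > "$nvmet_host/dhchap_key"        # DHHC-1:02:... secret for this keyid
echo "$ckey"        > "$nvmet_host/dhchap_ctrl_key"   # bidirectional (controller) secret, if any
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # "nvme0" once authentication succeeds
rpc_cmd bdev_nvme_detach_controller nvme0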
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.988 request: 00:19:21.988 { 00:19:21.988 "name": "nvme0", 00:19:21.988 "trtype": "tcp", 00:19:21.988 "traddr": "10.0.0.1", 00:19:21.988 "adrfam": "ipv4", 00:19:21.988 "trsvcid": "4420", 00:19:21.988 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:21.988 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:21.988 "prchk_reftag": false, 00:19:21.988 "prchk_guard": false, 00:19:21.988 "hdgst": false, 00:19:21.988 "ddgst": false, 00:19:21.988 "allow_unrecognized_csi": false, 00:19:21.988 "method": "bdev_nvme_attach_controller", 00:19:21.988 "req_id": 1 00:19:21.988 } 00:19:21.988 Got JSON-RPC error response 00:19:21.988 response: 00:19:21.988 { 00:19:21.988 "code": -5, 00:19:21.988 "message": "Input/output error" 00:19:21.988 } 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:21.988 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.989 request: 00:19:21.989 { 00:19:21.989 "name": "nvme0", 00:19:21.989 "trtype": "tcp", 00:19:21.989 "traddr": "10.0.0.1", 00:19:21.989 "adrfam": "ipv4", 00:19:21.989 "trsvcid": "4420", 00:19:21.989 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:21.989 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:21.989 "prchk_reftag": false, 00:19:21.989 "prchk_guard": false, 00:19:21.989 "hdgst": false, 00:19:21.989 "ddgst": false, 00:19:21.989 "dhchap_key": "key2", 00:19:21.989 "allow_unrecognized_csi": false, 00:19:21.989 "method": "bdev_nvme_attach_controller", 00:19:21.989 "req_id": 1 00:19:21.989 } 00:19:21.989 Got JSON-RPC error response 00:19:21.989 response: 00:19:21.989 { 00:19:21.989 "code": -5, 00:19:21.989 "message": "Input/output error" 00:19:21.989 } 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.989 10:13:35 
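# The two NOT-wrapped attempts above are the negative path: the target now only accepts
# key1 (sha256/ffdhe2048), so attaching with no DH-CHAP key at all, or with key2, must fail.
# A minimal reproduction of the second case, using the same rpc_cmd call as in the trace:
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
# => JSON-RPC error {"code": -5, "message": "Input/output error"}; afterwards
#    rpc_cmd bdev_nvme_get_controllers | jq length stays at 0, which is what the test asserts.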
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.989 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:22.248 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.249 request: 00:19:22.249 { 00:19:22.249 "name": "nvme0", 00:19:22.249 "trtype": "tcp", 00:19:22.249 "traddr": "10.0.0.1", 00:19:22.249 "adrfam": "ipv4", 00:19:22.249 "trsvcid": "4420", 
00:19:22.249 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:22.249 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:22.249 "prchk_reftag": false, 00:19:22.249 "prchk_guard": false, 00:19:22.249 "hdgst": false, 00:19:22.249 "ddgst": false, 00:19:22.249 "dhchap_key": "key1", 00:19:22.249 "dhchap_ctrlr_key": "ckey2", 00:19:22.249 "allow_unrecognized_csi": false, 00:19:22.249 "method": "bdev_nvme_attach_controller", 00:19:22.249 "req_id": 1 00:19:22.249 } 00:19:22.249 Got JSON-RPC error response 00:19:22.249 response: 00:19:22.249 { 00:19:22.249 "code": -5, 00:19:22.249 "message": "Input/output error" 00:19:22.249 } 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.249 10:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.249 nvme0n1 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.249 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.510 request: 00:19:22.510 { 00:19:22.510 "name": "nvme0", 00:19:22.510 "dhchap_key": "key1", 00:19:22.510 "dhchap_ctrlr_key": "ckey2", 00:19:22.510 "method": "bdev_nvme_set_keys", 00:19:22.510 "req_id": 1 00:19:22.510 } 00:19:22.510 Got JSON-RPC error response 00:19:22.510 response: 00:19:22.510 
{ 00:19:22.510 "code": -13, 00:19:22.510 "message": "Permission denied" 00:19:22.510 } 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:19:22.510 10:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYyNDgxZWM3YjkyODJkMzJlYzY0MzM3YTUzYTA5ZjRiMjQ0NDA0NjA3YmIyNTUymukAjQ==: 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: ]] 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI2NTgwMWE1Zjc1NDM1MmEzYTY3MDlhODJmMzRjMmI1ZTRmOGI0ZGE3OGNlZDAyagSu9A==: 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
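# The sequence above re-keys a live, authenticated connection: the target secret is rotated
# to key2 in nvmet configfs, bdev_nvme_set_keys updates the DH-CHAP keys on the already
# attached nvme0 controller, and a mismatched pair is rejected with -13 (Permission denied).
# Condensed, against the controller attached with --ctrlr-loss-timeout-sec 1 earlier:
rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2   # matches the re-keyed target: succeeds
rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2   # stale pair: {"code": -13, "message": "Permission denied"}
rpc_cmd bdev_nvme_get_controllers | jq length   # the test then polls this (sleep 1s) until the count reaches 0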
-- host/auth.sh@142 -- # get_main_ns_ip 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.444 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.702 nvme0n1 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGRhY2E5NGVlNDQ0Y2YxZDU3ZWE5ZjA3MWJmNmQxYjit5ku2: 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: ]] 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FhZjI1ZDFkOTA1OWYxMjVjMDRmNjc2ZmE3YzhjOGbG/9sK: 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.702 request: 00:19:23.702 { 00:19:23.702 "name": "nvme0", 00:19:23.702 "dhchap_key": "key2", 00:19:23.702 "dhchap_ctrlr_key": "ckey1", 00:19:23.702 "method": "bdev_nvme_set_keys", 00:19:23.702 "req_id": 1 00:19:23.702 } 00:19:23.702 Got JSON-RPC error response 00:19:23.702 response: 00:19:23.702 { 00:19:23.702 "code": -13, 00:19:23.702 "message": "Permission denied" 00:19:23.702 } 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:19:23.702 10:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:19:24.637 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.637 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:24.637 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.637 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.637 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.895 rmmod nvme_tcp 00:19:24.895 rmmod nvme_fabrics 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78370 ']' 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78370 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78370 ']' 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78370 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78370 00:19:24.895 killing process with pid 78370 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78370' 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78370 00:19:24.895 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78370 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:25.153 10:13:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:25.153 10:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:25.153 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:25.153 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:25.153 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.153 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.153 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:25.411 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:25.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:26.233 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
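# clean_kernel_target above dismantles the kernel nvmet configuration in reverse order of
# creation: unlink the port->subsystem reference, remove the namespace, port and subsystem
# directories, then unload the modules. The exact paths from the trace:
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe -r nvmet_tcp nvmet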
00:19:26.233 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:26.233 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.9My /tmp/spdk.key-null.WS5 /tmp/spdk.key-sha256.8gF /tmp/spdk.key-sha384.FyH /tmp/spdk.key-sha512.mwG /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:26.233 10:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:26.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:26.491 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:26.491 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:26.491 00:19:26.491 real 0m39.081s 00:19:26.491 user 0m35.035s 00:19:26.491 sys 0m3.894s 00:19:26.491 10:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.491 10:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.491 ************************************ 00:19:26.491 END TEST nvmf_auth_host 00:19:26.491 ************************************ 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.751 ************************************ 00:19:26.751 START TEST nvmf_digest 00:19:26.751 ************************************ 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:26.751 * Looking for test storage... 
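# The auth suite above ends by deleting its generated /tmp/spdk.key-* files; the nvmf_digest
# suite that starts next exercises NVMe/TCP header and data digest handling, i.e. the same
# "hdgst"/"ddgst" booleans visible in the bdev_nvme_attach_controller request dumps earlier
# in this log. An attach request with both digests enabled would look roughly like the
# following (parameter names taken from those dumps; address/NQN values are illustrative):
# {
#   "method": "bdev_nvme_attach_controller",
#   "params": { "name": "nvme0", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
#               "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
#               "hdgst": true, "ddgst": true }
# }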
00:19:26.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.751 --rc genhtml_branch_coverage=1 00:19:26.751 --rc genhtml_function_coverage=1 00:19:26.751 --rc genhtml_legend=1 00:19:26.751 --rc geninfo_all_blocks=1 00:19:26.751 --rc geninfo_unexecuted_blocks=1 00:19:26.751 00:19:26.751 ' 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.751 --rc genhtml_branch_coverage=1 00:19:26.751 --rc genhtml_function_coverage=1 00:19:26.751 --rc genhtml_legend=1 00:19:26.751 --rc geninfo_all_blocks=1 00:19:26.751 --rc geninfo_unexecuted_blocks=1 00:19:26.751 00:19:26.751 ' 00:19:26.751 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.751 --rc genhtml_branch_coverage=1 00:19:26.751 --rc genhtml_function_coverage=1 00:19:26.751 --rc genhtml_legend=1 00:19:26.751 --rc geninfo_all_blocks=1 00:19:26.751 --rc geninfo_unexecuted_blocks=1 00:19:26.751 00:19:26.751 ' 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.752 --rc genhtml_branch_coverage=1 00:19:26.752 --rc genhtml_function_coverage=1 00:19:26.752 --rc genhtml_legend=1 00:19:26.752 --rc geninfo_all_blocks=1 00:19:26.752 --rc geninfo_unexecuted_blocks=1 00:19:26.752 00:19:26.752 ' 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.752 10:13:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.752 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:26.752 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:27.011 Cannot find device "nvmf_init_br" 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:27.011 Cannot find device "nvmf_init_br2" 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:27.011 Cannot find device "nvmf_tgt_br" 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:19:27.011 Cannot find device "nvmf_tgt_br2" 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:27.011 Cannot find device "nvmf_init_br" 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:27.011 Cannot find device "nvmf_init_br2" 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:27.011 Cannot find device "nvmf_tgt_br" 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:19:27.011 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:27.011 Cannot find device "nvmf_tgt_br2" 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:27.012 Cannot find device "nvmf_br" 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:27.012 Cannot find device "nvmf_init_if" 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:27.012 Cannot find device "nvmf_init_if2" 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.012 10:13:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.012 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:27.270 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:27.270 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:19:27.270 00:19:27.270 --- 10.0.0.3 ping statistics --- 00:19:27.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.270 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:27.270 10:13:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:27.270 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:27.270 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:19:27.270 00:19:27.270 --- 10.0.0.4 ping statistics --- 00:19:27.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.270 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:27.270 00:19:27.270 --- 10.0.0.1 ping statistics --- 00:19:27.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.270 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:27.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:19:27.270 00:19:27.270 --- 10.0.0.2 ping statistics --- 00:19:27.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.270 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:27.270 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:27.271 ************************************ 00:19:27.271 START TEST nvmf_digest_clean 00:19:27.271 ************************************ 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
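For orientation while reading the nvmf_veth_init trace above: the harness builds a self-contained veth/bridge topology so the initiator side of the test and the nvmf target (inside the nvmf_tgt_ns_spdk namespace) can exchange NVMe/TCP traffic without any physical NIC. Below is a condensed, hedged sketch of the equivalent commands; interface names, addresses, and iptables rules are copied from the log, the second initiator/target pair is omitted for brevity, and everything must run as root.

    # create the target namespace and one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the netns

    # address the endpoints (10.0.0.1 = initiator, 10.0.0.3 = target listener)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring the links up and join the host-side peers with a bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # admit NVMe/TCP (port 4420) and bridge-internal forwarding, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3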
00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80041 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80041 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80041 ']' 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.271 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:27.271 [2024-11-19 10:13:41.108944] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:27.271 [2024-11-19 10:13:41.109047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.530 [2024-11-19 10:13:41.253710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.530 [2024-11-19 10:13:41.313246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.530 [2024-11-19 10:13:41.313303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.530 [2024-11-19 10:13:41.313318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.530 [2024-11-19 10:13:41.313327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.530 [2024-11-19 10:13:41.313334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.530 [2024-11-19 10:13:41.313722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.530 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:27.790 [2024-11-19 10:13:41.451313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:27.790 null0 00:19:27.790 [2024-11-19 10:13:41.502856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.790 [2024-11-19 10:13:41.527007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80061 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80061 /var/tmp/bperf.sock 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80061 ']' 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.790 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:27.790 [2024-11-19 10:13:41.600560] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:27.790 [2024-11-19 10:13:41.600713] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80061 ] 00:19:28.052 [2024-11-19 10:13:41.756724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.052 [2024-11-19 10:13:41.824304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.052 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.052 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:28.052 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:28.052 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:28.052 10:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:28.619 [2024-11-19 10:13:42.289425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.619 10:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:28.619 10:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:28.877 nvme0n1 00:19:28.877 10:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:28.877 10:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:29.136 Running I/O for 2 seconds... 
00:19:31.006 14859.00 IOPS, 58.04 MiB/s [2024-11-19T10:13:44.895Z] 14795.50 IOPS, 57.79 MiB/s 00:19:31.006 Latency(us) 00:19:31.006 [2024-11-19T10:13:44.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.006 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:31.006 nvme0n1 : 2.01 14811.11 57.86 0.00 0.00 8634.94 7923.90 18350.08 00:19:31.006 [2024-11-19T10:13:44.895Z] =================================================================================================================== 00:19:31.006 [2024-11-19T10:13:44.895Z] Total : 14811.11 57.86 0.00 0.00 8634.94 7923.90 18350.08 00:19:31.006 { 00:19:31.006 "results": [ 00:19:31.006 { 00:19:31.006 "job": "nvme0n1", 00:19:31.006 "core_mask": "0x2", 00:19:31.006 "workload": "randread", 00:19:31.006 "status": "finished", 00:19:31.006 "queue_depth": 128, 00:19:31.006 "io_size": 4096, 00:19:31.006 "runtime": 2.006534, 00:19:31.006 "iops": 14811.11209677982, 00:19:31.006 "mibps": 57.85590662804617, 00:19:31.006 "io_failed": 0, 00:19:31.006 "io_timeout": 0, 00:19:31.006 "avg_latency_us": 8634.94320339911, 00:19:31.006 "min_latency_us": 7923.898181818182, 00:19:31.006 "max_latency_us": 18350.08 00:19:31.006 } 00:19:31.006 ], 00:19:31.006 "core_count": 1 00:19:31.006 } 00:19:31.006 10:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:31.006 10:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:31.006 10:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:31.006 10:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:31.006 10:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:31.006 | select(.opcode=="crc32c") 00:19:31.006 | "\(.module_name) \(.executed)"' 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80061 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80061 ']' 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80061 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80061 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.572 killing 
process with pid 80061 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80061' 00:19:31.572 Received shutdown signal, test time was about 2.000000 seconds 00:19:31.572 00:19:31.572 Latency(us) 00:19:31.572 [2024-11-19T10:13:45.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.572 [2024-11-19T10:13:45.461Z] =================================================================================================================== 00:19:31.572 [2024-11-19T10:13:45.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80061 00:19:31.572 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80061 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80113 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80113 /var/tmp/bperf.sock 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80113 ']' 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.838 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:31.838 [2024-11-19 10:13:45.511438] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
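Each run_bperf pass traced above follows the same initiator-side sequence: start bdevperf idle, finish framework initialization over its private RPC socket, attach an NVMe-oF controller with data digest enabled, then trigger the timed workload. A hedged sketch with the arguments taken from the first pass in the log; the sleep is a crude stand-in for the harness's waitforlisten helper, and paths assume the vagrant repository layout shown above.

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bperf.sock

    # start bdevperf idle (-z) and paused until RPC-driven init (--wait-for-rpc)
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    sleep 1   # stand-in for waiting until $SOCK is actually listening

    # complete subsystem initialization over the bperf socket
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init

    # attach the target over TCP with data digest (--ddgst) enabled
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # kick off the 2-second run; IOPS/latency results are printed by bdevperf
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests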
00:19:31.838 [2024-11-19 10:13:45.511553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80113 ] 00:19:31.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:31.838 Zero copy mechanism will not be used. 00:19:31.838 [2024-11-19 10:13:45.652454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.838 [2024-11-19 10:13:45.709750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.097 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.097 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:32.097 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:32.097 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:32.097 10:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:32.356 [2024-11-19 10:13:46.098687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.356 10:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:32.356 10:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:32.614 nvme0n1 00:19:32.614 10:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:32.614 10:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:32.873 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:32.873 Zero copy mechanism will not be used. 00:19:32.873 Running I/O for 2 seconds... 
00:19:34.742 7648.00 IOPS, 956.00 MiB/s [2024-11-19T10:13:48.631Z] 7448.00 IOPS, 931.00 MiB/s 00:19:34.742 Latency(us) 00:19:34.742 [2024-11-19T10:13:48.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.742 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:34.742 nvme0n1 : 2.00 7447.50 930.94 0.00 0.00 2144.79 1906.50 3753.43 00:19:34.742 [2024-11-19T10:13:48.631Z] =================================================================================================================== 00:19:34.742 [2024-11-19T10:13:48.631Z] Total : 7447.50 930.94 0.00 0.00 2144.79 1906.50 3753.43 00:19:34.742 { 00:19:34.742 "results": [ 00:19:34.742 { 00:19:34.742 "job": "nvme0n1", 00:19:34.742 "core_mask": "0x2", 00:19:34.742 "workload": "randread", 00:19:34.742 "status": "finished", 00:19:34.742 "queue_depth": 16, 00:19:34.742 "io_size": 131072, 00:19:34.742 "runtime": 2.002283, 00:19:34.742 "iops": 7447.498680256487, 00:19:34.742 "mibps": 930.9373350320609, 00:19:34.742 "io_failed": 0, 00:19:34.742 "io_timeout": 0, 00:19:34.742 "avg_latency_us": 2144.7895747171283, 00:19:34.742 "min_latency_us": 1906.5018181818182, 00:19:34.742 "max_latency_us": 3753.4254545454546 00:19:34.742 } 00:19:34.742 ], 00:19:34.742 "core_count": 1 00:19:34.742 } 00:19:34.742 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:34.742 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:34.742 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:34.742 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:34.742 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:34.742 | select(.opcode=="crc32c") 00:19:34.742 | "\(.module_name) \(.executed)"' 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80113 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80113 ']' 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80113 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80113 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
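The pass/fail decision after every run hinges on the accel framework statistics queried above: the test asks the bdevperf instance which accel module executed the crc32c operations and how many it ran, then requires a non-zero count from the expected module (software here, since DSA offload is disabled in this configuration). A hedged sketch of that check, reusing the jq filter seen in the log:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bperf.sock

    # yields "<module_name> <executed>" for the crc32c opcode
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    exp_module=software                       # no DSA on this initiator
    (( acc_executed > 0 )) || exit 1          # digests must actually have been computed
    [[ $acc_module == "$exp_module" ]] || exit 1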
00:19:35.309 killing process with pid 80113 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80113' 00:19:35.309 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80113 00:19:35.309 Received shutdown signal, test time was about 2.000000 seconds 00:19:35.309 00:19:35.309 Latency(us) 00:19:35.309 [2024-11-19T10:13:49.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.310 [2024-11-19T10:13:49.199Z] =================================================================================================================== 00:19:35.310 [2024-11-19T10:13:49.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.310 10:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80113 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80166 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80166 /var/tmp/bperf.sock 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80166 ']' 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.310 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:35.310 [2024-11-19 10:13:49.187198] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:19:35.310 [2024-11-19 10:13:49.187284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80166 ] 00:19:35.568 [2024-11-19 10:13:49.328427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.568 [2024-11-19 10:13:49.385985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.568 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.568 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:35.568 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:35.568 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:35.568 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:36.133 [2024-11-19 10:13:49.726863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.133 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:36.134 10:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:36.391 nvme0n1 00:19:36.391 10:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:36.391 10:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:36.391 Running I/O for 2 seconds... 
00:19:38.334 16003.00 IOPS, 62.51 MiB/s [2024-11-19T10:13:52.482Z] 16066.00 IOPS, 62.76 MiB/s 00:19:38.593 Latency(us) 00:19:38.593 [2024-11-19T10:13:52.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.593 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:38.593 nvme0n1 : 2.01 16058.40 62.73 0.00 0.00 7964.91 2591.65 16086.11 00:19:38.593 [2024-11-19T10:13:52.482Z] =================================================================================================================== 00:19:38.593 [2024-11-19T10:13:52.482Z] Total : 16058.40 62.73 0.00 0.00 7964.91 2591.65 16086.11 00:19:38.593 { 00:19:38.593 "results": [ 00:19:38.593 { 00:19:38.593 "job": "nvme0n1", 00:19:38.593 "core_mask": "0x2", 00:19:38.593 "workload": "randwrite", 00:19:38.593 "status": "finished", 00:19:38.593 "queue_depth": 128, 00:19:38.593 "io_size": 4096, 00:19:38.593 "runtime": 2.008917, 00:19:38.593 "iops": 16058.403607515891, 00:19:38.593 "mibps": 62.72813909185895, 00:19:38.593 "io_failed": 0, 00:19:38.593 "io_timeout": 0, 00:19:38.593 "avg_latency_us": 7964.907814912923, 00:19:38.593 "min_latency_us": 2591.650909090909, 00:19:38.593 "max_latency_us": 16086.10909090909 00:19:38.593 } 00:19:38.593 ], 00:19:38.593 "core_count": 1 00:19:38.593 } 00:19:38.593 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:38.593 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:38.593 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:38.593 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:38.593 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:38.593 | select(.opcode=="crc32c") 00:19:38.593 | "\(.module_name) \(.executed)"' 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80166 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80166 ']' 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80166 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80166 00:19:38.851 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.852 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
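Taken end to end, nvmf_digest_clean repeats the cycle above over four workload shapes: 4 KiB random reads and writes at queue depth 128, and 128 KiB random reads and writes at queue depth 16 (where bdevperf also notes that 131072 bytes exceeds the 65536-byte zero-copy threshold). A hedged sketch of that outer loop, reusing the run_bperf helper traced in this log:

    # workload, block size, queue depth for each pass, as seen in the log
    for params in "randread 4096 128"  "randread 131072 16" \
                  "randwrite 4096 128" "randwrite 131072 16"; do
        read -r rw bs qd <<< "$params"
        run_bperf "$rw" "$bs" "$qd" false   # final arg: scan_dsa, false on this initiator
    done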
00:19:38.852 killing process with pid 80166 00:19:38.852 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80166' 00:19:38.852 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80166 00:19:38.852 Received shutdown signal, test time was about 2.000000 seconds 00:19:38.852 00:19:38.852 Latency(us) 00:19:38.852 [2024-11-19T10:13:52.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.852 [2024-11-19T10:13:52.741Z] =================================================================================================================== 00:19:38.852 [2024-11-19T10:13:52.741Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.852 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80166 00:19:39.110 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:39.110 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:39.110 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:39.110 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:39.110 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:39.110 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80223 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80223 /var/tmp/bperf.sock 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80223 ']' 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:39.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.111 10:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:39.111 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:39.111 Zero copy mechanism will not be used. 00:19:39.111 [2024-11-19 10:13:52.845617] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:19:39.111 [2024-11-19 10:13:52.845751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80223 ] 00:19:39.369 [2024-11-19 10:13:53.004750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.369 [2024-11-19 10:13:53.063397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.369 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.369 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:39.369 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:39.369 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:39.369 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:39.628 [2024-11-19 10:13:53.480286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.886 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:39.886 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:40.144 nvme0n1 00:19:40.144 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:40.144 10:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:40.144 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:40.144 Zero copy mechanism will not be used. 00:19:40.144 Running I/O for 2 seconds... 
00:19:42.176 6417.00 IOPS, 802.12 MiB/s [2024-11-19T10:13:56.065Z] 6432.00 IOPS, 804.00 MiB/s 00:19:42.176 Latency(us) 00:19:42.176 [2024-11-19T10:13:56.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.176 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:42.176 nvme0n1 : 2.00 6427.33 803.42 0.00 0.00 2483.75 1966.08 10724.07 00:19:42.176 [2024-11-19T10:13:56.065Z] =================================================================================================================== 00:19:42.176 [2024-11-19T10:13:56.065Z] Total : 6427.33 803.42 0.00 0.00 2483.75 1966.08 10724.07 00:19:42.176 { 00:19:42.176 "results": [ 00:19:42.176 { 00:19:42.176 "job": "nvme0n1", 00:19:42.176 "core_mask": "0x2", 00:19:42.176 "workload": "randwrite", 00:19:42.176 "status": "finished", 00:19:42.176 "queue_depth": 16, 00:19:42.176 "io_size": 131072, 00:19:42.176 "runtime": 2.003788, 00:19:42.176 "iops": 6427.326643337518, 00:19:42.176 "mibps": 803.4158304171898, 00:19:42.176 "io_failed": 0, 00:19:42.176 "io_timeout": 0, 00:19:42.176 "avg_latency_us": 2483.746698572024, 00:19:42.176 "min_latency_us": 1966.08, 00:19:42.176 "max_latency_us": 10724.072727272727 00:19:42.176 } 00:19:42.176 ], 00:19:42.176 "core_count": 1 00:19:42.176 } 00:19:42.176 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:42.176 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:42.176 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:42.176 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:42.176 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:42.176 | select(.opcode=="crc32c") 00:19:42.176 | "\(.module_name) \(.executed)"' 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80223 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80223 ']' 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80223 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80223 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.744 
killing process with pid 80223 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80223' 00:19:42.744 Received shutdown signal, test time was about 2.000000 seconds 00:19:42.744 00:19:42.744 Latency(us) 00:19:42.744 [2024-11-19T10:13:56.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.744 [2024-11-19T10:13:56.633Z] =================================================================================================================== 00:19:42.744 [2024-11-19T10:13:56.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80223 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80223 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80041 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80041 ']' 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80041 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80041 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.744 killing process with pid 80041 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80041' 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80041 00:19:42.744 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80041 00:19:43.004 00:19:43.004 real 0m15.757s 00:19:43.004 user 0m31.065s 00:19:43.004 sys 0m4.537s 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.004 ************************************ 00:19:43.004 END TEST nvmf_digest_clean 00:19:43.004 ************************************ 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:43.004 ************************************ 00:19:43.004 START TEST nvmf_digest_error 00:19:43.004 ************************************ 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:19:43.004 10:13:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80295 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80295 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80295 ']' 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.004 10:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:43.263 [2024-11-19 10:13:56.917172] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:43.263 [2024-11-19 10:13:56.917275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.263 [2024-11-19 10:13:57.072900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.263 [2024-11-19 10:13:57.140348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.263 [2024-11-19 10:13:57.140412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.263 [2024-11-19 10:13:57.140426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.263 [2024-11-19 10:13:57.140436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.263 [2024-11-19 10:13:57.140445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
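The nvmf_digest_error test starting here launches the target with --wait-for-rpc precisely so that, in the lines that follow, crc32c can be remapped onto the error accel module before the framework initializes. A minimal sketch of that target-side sequence, assuming the target's default /var/tmp/spdk.sock RPC socket (the address rpc_cmd and waitforlisten use above):

# Sketch only: target-side setup for the digest-error test.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# While the target is still waiting for RPCs, route crc32c operations through
# the "error" accel module (matches the rpc_accel_assign_opc notice below).
"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o crc32c -m error

# Then let initialization proceed; the sock/TCP notices that follow in the
# transcript show the transport coming up and listening on 10.0.0.3:4420.
"$SPDK_DIR/scripts/rpc.py" framework_start_init

With crc32c owned by the error module, the script can later toggle injection per test case via accel_error_inject_error, as the transcript shows: -t disable before attaching the controller, then -t corrupt -i 256 once I/O is about to start.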
00:19:43.263 [2024-11-19 10:13:57.140954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:44.197 [2024-11-19 10:13:57.921551] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:44.197 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.198 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:44.198 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:44.198 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.198 10:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:44.198 [2024-11-19 10:13:57.982184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:44.198 null0 00:19:44.198 [2024-11-19 10:13:58.034494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.198 [2024-11-19 10:13:58.058644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80333 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80333 /var/tmp/bperf.sock 00:19:44.198 10:13:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80333 ']' 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.198 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:44.456 [2024-11-19 10:13:58.127847] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:44.456 [2024-11-19 10:13:58.127981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80333 ] 00:19:44.456 [2024-11-19 10:13:58.278195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.456 [2024-11-19 10:13:58.342229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.715 [2024-11-19 10:13:58.396293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:44.715 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.715 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:44.715 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:44.715 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:44.974 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:44.974 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.974 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:44.974 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.974 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:44.974 10:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:45.232 nvme0n1 00:19:45.490 10:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:45.490 10:13:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.490 10:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:45.490 10:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.490 10:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:45.490 10:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:45.490 Running I/O for 2 seconds... 00:19:45.490 [2024-11-19 10:13:59.340502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.490 [2024-11-19 10:13:59.340557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.490 [2024-11-19 10:13:59.340572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.490 [2024-11-19 10:13:59.357779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.490 [2024-11-19 10:13:59.357849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.490 [2024-11-19 10:13:59.357864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.490 [2024-11-19 10:13:59.375042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.490 [2024-11-19 10:13:59.375085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.490 [2024-11-19 10:13:59.375100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.392286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.392347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.392361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.409536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.409578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.409593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.426684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.426720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22913 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.426733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.443780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.443821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.443834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.460879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.460930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.460945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.479246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.479313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.479329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.496907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.496967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.496983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.514203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.514246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.514260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.531364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.531408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.531422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.548802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.548863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:21577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.548878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.566315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.566379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.566394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.583527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.583566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.583582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.600726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.600767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.600780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.617811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.617849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.617862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.750 [2024-11-19 10:13:59.634879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:45.750 [2024-11-19 10:13:59.634928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.750 [2024-11-19 10:13:59.634942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.651968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.652007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.652021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.669048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.669085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.669099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.686525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.686594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.686619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.703971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.704029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.704043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.721201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.721239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.721253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.738271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.738307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.738320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.755364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.755401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.755414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.772456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.772495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.772508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.789569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 
[2024-11-19 10:13:59.789612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.789626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.806716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.806753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.806766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.823755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.823792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.823805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.840804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.840841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.840855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.857881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.857933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.857948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.875290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.875328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.875341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.021 [2024-11-19 10:13:59.892502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.021 [2024-11-19 10:13:59.892540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.021 [2024-11-19 10:13:59.892553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:13:59.909619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:13:59.909657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.279 [2024-11-19 10:13:59.909671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:13:59.927026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:13:59.927066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.279 [2024-11-19 10:13:59.927081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:13:59.944254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:13:59.944293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.279 [2024-11-19 10:13:59.944306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:13:59.961496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:13:59.961539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.279 [2024-11-19 10:13:59.961553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:13:59.979199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:13:59.979241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.279 [2024-11-19 10:13:59.979254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:13:59.996402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:13:59.996440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.279 [2024-11-19 10:13:59.996454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:14:00.013517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:14:00.013558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.279 [2024-11-19 10:14:00.013571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:14:00.030681] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:14:00.030716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.279 [2024-11-19 10:14:00.030729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.279 [2024-11-19 10:14:00.047787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.279 [2024-11-19 10:14:00.047825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.280 [2024-11-19 10:14:00.047838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.280 [2024-11-19 10:14:00.064885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.280 [2024-11-19 10:14:00.064935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.280 [2024-11-19 10:14:00.064949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.280 [2024-11-19 10:14:00.081944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.280 [2024-11-19 10:14:00.081980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.280 [2024-11-19 10:14:00.081993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.280 [2024-11-19 10:14:00.099032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.280 [2024-11-19 10:14:00.099068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.280 [2024-11-19 10:14:00.099082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.280 [2024-11-19 10:14:00.116190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.280 [2024-11-19 10:14:00.116229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.280 [2024-11-19 10:14:00.116242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.280 [2024-11-19 10:14:00.133313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.280 [2024-11-19 10:14:00.133348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.280 [2024-11-19 10:14:00.133361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
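Each repeated pair of lines in this stretch is the injection doing its job: the crc32c error injection set up earlier corrupts the digests computed on the target, the host's nvme_tcp layer detects the mismatch ("data digest error"), and the read completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). Because the bperf side was configured with --bdev-retry-count -1, those completions are retried rather than failed, so I/O keeps flowing. A quick cross-check of the in-flight throughput sample printed a little further on, using the 4096-byte I/O size this bdevperf run was launched with:

awk 'BEGIN { printf "%.2f MiB/s\n", 14548.00 * 4096 / (1024 * 1024) }'   # prints 56.83 MiB/s, matching the transcript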
00:19:46.280 [2024-11-19 10:14:00.150391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.280 [2024-11-19 10:14:00.150427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.280 [2024-11-19 10:14:00.150440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.280 [2024-11-19 10:14:00.167415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.280 [2024-11-19 10:14:00.167452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.280 [2024-11-19 10:14:00.167465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.184506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.184543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.184555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.201926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.201974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.201987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.219119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.219171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.219185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.236289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.236327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.236340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.253728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.253769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.253782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.271487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.271539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.271553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.288651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.288687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.288700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.306079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.306117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.306130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 14548.00 IOPS, 56.83 MiB/s [2024-11-19T10:14:00.428Z] [2024-11-19 10:14:00.324398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.324457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.324471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.341517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.341554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.341566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.358593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.358632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.358645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.375847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.375883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 
10:14:00.375896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.393640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.393700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.393714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.539 [2024-11-19 10:14:00.410804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.539 [2024-11-19 10:14:00.410846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.539 [2024-11-19 10:14:00.410860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.798 [2024-11-19 10:14:00.436112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.798 [2024-11-19 10:14:00.436166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.798 [2024-11-19 10:14:00.436181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.798 [2024-11-19 10:14:00.453892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.798 [2024-11-19 10:14:00.453944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.798 [2024-11-19 10:14:00.453958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.798 [2024-11-19 10:14:00.470974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.798 [2024-11-19 10:14:00.471011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.798 [2024-11-19 10:14:00.471024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.798 [2024-11-19 10:14:00.488777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.798 [2024-11-19 10:14:00.488834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.798 [2024-11-19 10:14:00.488857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.798 [2024-11-19 10:14:00.506564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.798 [2024-11-19 10:14:00.506616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9793 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.506631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.525948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.525989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.526004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.543077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.543116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.543130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.560513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.560556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.560571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.577586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.577625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.577639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.594621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.594658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.594672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.611710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.611749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.611762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.628785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.628823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.628836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.645851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.645889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.645903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.663651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.663702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.663724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.799 [2024-11-19 10:14:00.682705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:46.799 [2024-11-19 10:14:00.682755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.799 [2024-11-19 10:14:00.682781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.701941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.701992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.702014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.720960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.721001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.721015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.740739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.740775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.740787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.758339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.758375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.758387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.775418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.775454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.775467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.792446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.792485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.792498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.809534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.809570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.809583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.826631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.826670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.826683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.843634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.843671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.843684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.860649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.860686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.860699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.877675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 
00:19:47.058 [2024-11-19 10:14:00.877715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.877728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.894735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.894770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.894783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.911802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.911838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.911851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.928830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.928868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.928880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.058 [2024-11-19 10:14:00.945868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.058 [2024-11-19 10:14:00.945906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.058 [2024-11-19 10:14:00.945933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:00.962952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:00.962998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:00.963011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:00.980064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:00.980098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:00.980110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:00.997196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:00.997234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:00.997246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.014423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.014460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.014472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.031514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.031549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.031563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.048654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.048691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.048703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.065743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.065779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.065792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.082782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.082818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.082830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.099793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.099830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.099843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.116880] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.116924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.116939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.134039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.134075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.134088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.151159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.151193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.151205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.168244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.168270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.168289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.185235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.185269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.185282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.318 [2024-11-19 10:14:01.202253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.318 [2024-11-19 10:14:01.202289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.318 [2024-11-19 10:14:01.202303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.577 [2024-11-19 10:14:01.219298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.577 [2024-11-19 10:14:01.219334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.577 [2024-11-19 10:14:01.219357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
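Each crc32c corruption injected by the digest test surfaces twice in the console output above: once as a data digest error reported by nvme_tcp.c and once as a completion carrying COMMAND TRANSIENT TRANSPORT ERROR (00/22) printed by nvme_qpair.c. A minimal way to tally both patterns from a saved copy of this console output (bperf.log is a hypothetical file name; entries here share physical lines, so count occurrences rather than matching lines):

  grep -o 'data digest error on tqpair' bperf.log | wc -l
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log | wc -l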
00:19:47.577 [2024-11-19 10:14:01.236387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.578 [2024-11-19 10:14:01.236423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.578 [2024-11-19 10:14:01.236437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.578 [2024-11-19 10:14:01.253469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.578 [2024-11-19 10:14:01.253505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.578 [2024-11-19 10:14:01.253518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.578 [2024-11-19 10:14:01.270613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.578 [2024-11-19 10:14:01.270655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.578 [2024-11-19 10:14:01.270669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.578 [2024-11-19 10:14:01.287706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.578 [2024-11-19 10:14:01.287743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.578 [2024-11-19 10:14:01.287757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.578 [2024-11-19 10:14:01.304834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.578 [2024-11-19 10:14:01.304871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.578 [2024-11-19 10:14:01.304884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.578 14548.00 IOPS, 56.83 MiB/s [2024-11-19T10:14:01.467Z] [2024-11-19 10:14:01.323214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b3540) 00:19:47.578 [2024-11-19 10:14:01.323252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.578 [2024-11-19 10:14:01.323265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.578 00:19:47.578 Latency(us) 00:19:47.578 [2024-11-19T10:14:01.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.578 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:47.578 nvme0n1 : 2.01 14582.51 56.96 0.00 0.00 8769.84 8162.21 34555.35 00:19:47.578 [2024-11-19T10:14:01.467Z] 
=================================================================================================================== 00:19:47.578 [2024-11-19T10:14:01.467Z] Total : 14582.51 56.96 0.00 0.00 8769.84 8162.21 34555.35 00:19:47.578 { 00:19:47.578 "results": [ 00:19:47.578 { 00:19:47.578 "job": "nvme0n1", 00:19:47.578 "core_mask": "0x2", 00:19:47.578 "workload": "randread", 00:19:47.578 "status": "finished", 00:19:47.578 "queue_depth": 128, 00:19:47.578 "io_size": 4096, 00:19:47.578 "runtime": 2.012754, 00:19:47.578 "iops": 14582.507350625065, 00:19:47.578 "mibps": 56.96291933837916, 00:19:47.578 "io_failed": 0, 00:19:47.578 "io_timeout": 0, 00:19:47.578 "avg_latency_us": 8769.844017332538, 00:19:47.578 "min_latency_us": 8162.210909090909, 00:19:47.578 "max_latency_us": 34555.34545454545 00:19:47.578 } 00:19:47.578 ], 00:19:47.578 "core_count": 1 00:19:47.578 } 00:19:47.578 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:47.578 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:47.578 | .driver_specific 00:19:47.578 | .nvme_error 00:19:47.578 | .status_code 00:19:47.578 | .command_transient_transport_error' 00:19:47.578 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:47.578 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:47.836 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 )) 00:19:47.836 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80333 00:19:47.836 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80333 ']' 00:19:47.836 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80333 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80333 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.095 killing process with pid 80333 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80333' 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80333 00:19:48.095 Received shutdown signal, test time was about 2.000000 seconds 00:19:48.095 00:19:48.095 Latency(us) 00:19:48.095 [2024-11-19T10:14:01.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.095 [2024-11-19T10:14:01.984Z] =================================================================================================================== 00:19:48.095 [2024-11-19T10:14:01.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 80333 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80380 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80380 /var/tmp/bperf.sock 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80380 ']' 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:48.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.095 10:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:48.354 [2024-11-19 10:14:02.016204] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:48.354 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:48.354 Zero copy mechanism will not be used. 
00:19:48.354 [2024-11-19 10:14:02.016303] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80380 ] 00:19:48.354 [2024-11-19 10:14:02.166516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.354 [2024-11-19 10:14:02.234730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.612 [2024-11-19 10:14:02.292192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:48.612 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.612 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:48.612 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:48.612 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:48.872 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:48.872 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.872 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:48.872 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.872 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:48.872 10:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:49.130 nvme0n1 00:19:49.389 10:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:49.389 10:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.389 10:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:49.389 10:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.389 10:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:49.389 10:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:49.389 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:49.389 Zero copy mechanism will not be used. 00:19:49.389 Running I/O for 2 seconds... 
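The shell trace above spells out how this second digest-error pass is wired up. A hand-assembled recap of those commands follows; it is a sketch only: the command strings are copied verbatim from the trace, the ordering and backgrounding are inferred, and rpc_cmd is the harness wrapper from the common test scripts whose RPC socket is not shown in this excerpt.

  # initiator-side I/O generator: 131072-byte random reads, queue depth 16, 2 seconds (flags from the trace)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # enable per-NVMe error statistics and unlimited bdev retries (host/digest.sh@61)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # set crc32c error injection to 'disable' before attaching, attach the target with data digest (--ddgst) enabled,
  # then re-arm crc32c error injection in corrupt mode (host/digest.sh@63, @64, @67)
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the workload, then read back the transient transport error counter the test asserts on (host/digest.sh@69, @71)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'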
00:19:49.389 [2024-11-19 10:14:03.198544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.198600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.198616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.202879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.202929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.202944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.207224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.207261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.207275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.211600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.211640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.211654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.215958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.215995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.216008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.220284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.220323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.220335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.224582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.224619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.224632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.228899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.228948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.228962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.233159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.233194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.233217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.237478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.237515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.237528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.241778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.389 [2024-11-19 10:14:03.241815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.389 [2024-11-19 10:14:03.241828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.389 [2024-11-19 10:14:03.246082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.390 [2024-11-19 10:14:03.246119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.390 [2024-11-19 10:14:03.246132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.390 [2024-11-19 10:14:03.250403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.390 [2024-11-19 10:14:03.250440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.390 [2024-11-19 10:14:03.250453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.390 [2024-11-19 10:14:03.254696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.390 [2024-11-19 10:14:03.254733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.390 [2024-11-19 10:14:03.254747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.390 [2024-11-19 10:14:03.259054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.390 [2024-11-19 10:14:03.259090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.390 [2024-11-19 10:14:03.259102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.390 [2024-11-19 10:14:03.263297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.390 [2024-11-19 10:14:03.263333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.390 [2024-11-19 10:14:03.263345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.390 [2024-11-19 10:14:03.267595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.390 [2024-11-19 10:14:03.267634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.390 [2024-11-19 10:14:03.267647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.390 [2024-11-19 10:14:03.271954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.390 [2024-11-19 10:14:03.271991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.390 [2024-11-19 10:14:03.272003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.390 [2024-11-19 10:14:03.276231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.390 [2024-11-19 10:14:03.276271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.390 [2024-11-19 10:14:03.276284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.280531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.280570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.280583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.284838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.284876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.284889] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.289221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.289260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.289274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.293591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.293628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.293641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.297996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.298035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.298048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.302340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.302379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.302392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.306672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.306709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.306722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.310991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.311027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.311040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.315277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.315334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:49.650 [2024-11-19 10:14:03.315347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.319500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.319538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.319551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.323769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.323804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.323817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.328024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.328061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.328074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.332317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.332356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.332368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.336626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.336662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.336675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.650 [2024-11-19 10:14:03.340988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.650 [2024-11-19 10:14:03.341023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.650 [2024-11-19 10:14:03.341036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.345269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.345306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.345319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.349568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.349604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.349617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.353884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.353940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.353954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.358171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.358210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.358223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.362471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.362507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.362519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.366786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.366823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.366835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.371076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.371111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.371125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.375357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.375392] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.375404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.379657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.379694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.379707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.383957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.383992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.384005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.388217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.388253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.388266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.392497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.392533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.392545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.396781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.396819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.396832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.401082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.401118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.401130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.405334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.405370] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.405382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.409599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.409635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.409648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.413853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.413891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.413904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.418142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.418178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.418190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.422381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.422417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.422429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.426645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.651 [2024-11-19 10:14:03.426682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.651 [2024-11-19 10:14:03.426695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.651 [2024-11-19 10:14:03.430969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.431004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.431017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.435230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.435270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.435283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.439495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.439530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.439543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.443782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.443818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.443831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.448081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.448116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.448128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.452372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.452408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.452421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.456638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.456674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.456686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.460970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.461006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.461018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.465208] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.465244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.465257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.469479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.469515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.469528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.473744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.473780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.473793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.478035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.478071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.478083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.482298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.482335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.482348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.486593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.486630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.486642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.490975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.491009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.491023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:19:49.652 [2024-11-19 10:14:03.495328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.495365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.495378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.499641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.499678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.499692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.503897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.503944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.503958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.508099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.508136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.508149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.512393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.512430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.512443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.516649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.516688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.652 [2024-11-19 10:14:03.516700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.652 [2024-11-19 10:14:03.520951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.652 [2024-11-19 10:14:03.520988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.653 [2024-11-19 10:14:03.521000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.653 [2024-11-19 10:14:03.525228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.653 [2024-11-19 10:14:03.525268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.653 [2024-11-19 10:14:03.525280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.653 [2024-11-19 10:14:03.529487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.653 [2024-11-19 10:14:03.529525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.653 [2024-11-19 10:14:03.529539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.653 [2024-11-19 10:14:03.533766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.653 [2024-11-19 10:14:03.533805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.653 [2024-11-19 10:14:03.533817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.912 [2024-11-19 10:14:03.538075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.912 [2024-11-19 10:14:03.538114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.538127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.542309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.542353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.542366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.546664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.546701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.546713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.551034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.551070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.551083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.555352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.555388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.555401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.559635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.559674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.559688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.563952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.563989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.564001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.568248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.568286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.568298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.572515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.572556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.572568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.576800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.576838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.576851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.581119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.581156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.581169] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.585403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.585441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.585455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.589702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.589739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.589752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.594017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.594054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.594067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.598278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.598316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.598329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.602552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.602589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.602602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.606852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.606891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.606903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.611171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.611209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:49.913 [2024-11-19 10:14:03.611222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.615472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.615510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.615523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.619746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.619783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.619796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.624028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.624065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.624078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.628318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.628354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.628367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.632616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.632653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.632666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.636895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.636947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.636961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.641155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.641192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.641205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.645418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.645455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.645467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.649730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.913 [2024-11-19 10:14:03.649766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.913 [2024-11-19 10:14:03.649778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.913 [2024-11-19 10:14:03.653982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.654019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.654032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.658246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.658284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.658297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.662527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.662563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.662576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.666808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.666844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.666857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.671081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.671117] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.671130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.675277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.675313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.675325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.679526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.679565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.679579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.683796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.683832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.683844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.688069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.688104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.688116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.692388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.692423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.692436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.696674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.696710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.696722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.700976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.701011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.701024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.705228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.705264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.705277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.709490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.709525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.709538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.713804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.713841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.713854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.718107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.718142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.718155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.722391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.722427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.722440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.726659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.726694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.726707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.730935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 
00:19:49.914 [2024-11-19 10:14:03.730970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.731001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.735187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.735224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.735237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.739501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.739537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.739550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.743765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.743802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.743815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.748016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.748052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.748066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.752311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.752355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.752368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.756581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.756617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.756630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.760857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.760892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.914 [2024-11-19 10:14:03.760905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.914 [2024-11-19 10:14:03.765156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.914 [2024-11-19 10:14:03.765191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.765204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.915 [2024-11-19 10:14:03.769406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.915 [2024-11-19 10:14:03.769441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.769454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.915 [2024-11-19 10:14:03.773698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.915 [2024-11-19 10:14:03.773734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.773747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.915 [2024-11-19 10:14:03.777959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.915 [2024-11-19 10:14:03.777995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.778008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.915 [2024-11-19 10:14:03.782203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.915 [2024-11-19 10:14:03.782241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.782253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:49.915 [2024-11-19 10:14:03.786492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.915 [2024-11-19 10:14:03.786529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.786541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.915 [2024-11-19 10:14:03.790795] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.915 [2024-11-19 10:14:03.790833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.790845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:49.915 [2024-11-19 10:14:03.795073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.915 [2024-11-19 10:14:03.795108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.795120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:49.915 [2024-11-19 10:14:03.799341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:49.915 [2024-11-19 10:14:03.799377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.915 [2024-11-19 10:14:03.799389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.803568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.803604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.803617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.807827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.807861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.807873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.812206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.812242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.812256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.816542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.816579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.816592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:19:50.175 [2024-11-19 10:14:03.820770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.820810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.820823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.825053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.825089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.825102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.829281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.829319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.829331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.833573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.833609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.833621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.837881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.837932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.837946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.842190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.842226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.175 [2024-11-19 10:14:03.842240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.175 [2024-11-19 10:14:03.846449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.175 [2024-11-19 10:14:03.846486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.846498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.850722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.850757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.850770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.855068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.855103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.855115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.859391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.859426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.859439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.863719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.863754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.863767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.868022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.868057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.868069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.872320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.872355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.872368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.876598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.876635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.876647] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.880881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.880931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.880945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.885166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.885202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.885215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.889406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.889443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.889456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.893709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.893745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.893758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.897972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.898008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.898020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.902224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.902259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.902272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.906506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.906542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 
10:14:03.906555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.910781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.910818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.910830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.915116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.915153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.915166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.919381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.919417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.919430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.923667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.923703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.923716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.927901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.927946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.927959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.932187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.932221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.932234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.936428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.936462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.936474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.940683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.940719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.940731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.945011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.945046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.945058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.949250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.949284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.949297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.953514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.953551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.953564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.957788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.957825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.176 [2024-11-19 10:14:03.957838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.176 [2024-11-19 10:14:03.962012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.176 [2024-11-19 10:14:03.962048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.962061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:03.966249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:03.966286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.966298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:03.970512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:03.970549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.970562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:03.974762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:03.974797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.974810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:03.979058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:03.979092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.979105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:03.983392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:03.983428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.983441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:03.987669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:03.987703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.987715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:03.992012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:03.992046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.992058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:03.996307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:03.996341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:03.996354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.000606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.000641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.000654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.004862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.004905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.004932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.009167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.009203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.009216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.013478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.013510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.013523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.017770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.017808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.017821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.022051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.022087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.022100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.026474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 
[2024-11-19 10:14:04.026511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.026524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.030733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.030767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.030779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.035024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.035072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.035085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.039281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.039314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.039327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.043608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.043644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.043656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.047840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.047874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.047886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.052039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.052075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.052088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.056343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.056379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.056392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.177 [2024-11-19 10:14:04.060616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.177 [2024-11-19 10:14:04.060651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.177 [2024-11-19 10:14:04.060664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.064891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.064937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.064951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.069203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.069238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.069251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.073547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.073581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.073595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.077899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.077945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.077958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.082125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.082160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.082173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.086435] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.086471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.086484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.090735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.090770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.090782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.095017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.095053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.095066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.099271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.099306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.099319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.103539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.103578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.103591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.107863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.107901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.107928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.112184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.112219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.112231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:19:50.438 [2024-11-19 10:14:04.116455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.116490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.116503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.120739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.120773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.120785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.125040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.125075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.125088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.129301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.129337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.438 [2024-11-19 10:14:04.129350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.438 [2024-11-19 10:14:04.133598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.438 [2024-11-19 10:14:04.133634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.133647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.137858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.137894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.137906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.142080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.142117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.142130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.146336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.146373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.146386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.150575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.150613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.150626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.154840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.154878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.154891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.159104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.159139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.159152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.163371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.163406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.163419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.167601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.167636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.167649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.171848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.171886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.171899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.176070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.176104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.176117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.180374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.180409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.180422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.184689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.184727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.184740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.188986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.189023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.189036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.193265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.193301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.193314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.439 7223.00 IOPS, 902.88 MiB/s [2024-11-19T10:14:04.328Z] [2024-11-19 10:14:04.198497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.198533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.198546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.202762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.202797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:50.439 [2024-11-19 10:14:04.202810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.206969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.207003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.207016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.211169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.211205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.211217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.215378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.215413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.215426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.219635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.219670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.219683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.223926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.223959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.223971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.228193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.228228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.228241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.232494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.232530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.232542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.236809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.236845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.236859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.241084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.241119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.241132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.245326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.439 [2024-11-19 10:14:04.245363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.439 [2024-11-19 10:14:04.245376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.439 [2024-11-19 10:14:04.249603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.249641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.249654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.253864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.253900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.253926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.258130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.258164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.258176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.262391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.262428] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.262440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.266675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.266710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.266723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.270990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.271025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.271037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.275239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.275274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.275287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.279551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.279585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.279597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.283930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.283963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.283976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.288247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.288283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.288296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.292595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.292631] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.292644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.296973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.297009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.297021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.301224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.301261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.301273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.305486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.305522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.305535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.309841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.309877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.309890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.314226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.314262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.314275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.318518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.318554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.318567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.440 [2024-11-19 10:14:04.322866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xef0400) 00:19:50.440 [2024-11-19 10:14:04.322904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.440 [2024-11-19 10:14:04.322929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.327191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.327227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.327239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.331428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.331466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.331479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.335758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.335794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.335807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.340032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.340067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.340080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.344364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.344399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.344412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.348665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.348702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.348715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.353030] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.353065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.353078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.357348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.357388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.357401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.361645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.361682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.361694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.366037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.366074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.366088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.370301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.370342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.370354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.374582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.374620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.374632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.378852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.378890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.378903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 
dnr:0 00:19:50.701 [2024-11-19 10:14:04.383134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.701 [2024-11-19 10:14:04.383184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.701 [2024-11-19 10:14:04.387447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.701 [2024-11-19 10:14:04.387483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.387497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.391774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.391811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.391823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.396101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.396137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.396150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.400428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.400466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.400479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.404709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.404745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.404757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.408998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.409034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.409046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.413259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.413298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.413312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.417520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.417556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.417568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.421773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.421810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.421823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.426045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.426081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.426094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.430308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.430344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.430356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.434598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.434634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.434648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.438940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.438973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.438986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.443220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.443256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.443269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.447507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.447541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.447554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.451829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.451866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.451878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.456129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.456174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.456192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.460457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.460493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.460506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.464803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.464840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.464852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.469118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.469154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.469167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.473371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.473406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.473419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.477697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.477733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.477746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.482032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.482067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.482080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.486290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.486324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.486338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.490607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.490644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.490657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.494947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.494993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 [2024-11-19 10:14:04.495006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.499261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.499297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.702 
[2024-11-19 10:14:04.499310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.702 [2024-11-19 10:14:04.503594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.702 [2024-11-19 10:14:04.503630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.503642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.507981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.508016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.508029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.512329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.512364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.512376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.516586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.516621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.516634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.520891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.520941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.520954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.525225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.525262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.525274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.529520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.529558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.529571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.533801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.533838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.533851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.538070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.538105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.538118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.542350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.542385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.542398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.546617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.546657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.546670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.550953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.550990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.551002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.555164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.555200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.555213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.559441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.559477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.559489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.563652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.563688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.563700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.567954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.567994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.568008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.572252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.572293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.572306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.576544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.576593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.576607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.580839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.580882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.580895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.703 [2024-11-19 10:14:04.585124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.703 [2024-11-19 10:14:04.585164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.703 [2024-11-19 10:14:04.585178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.589379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.589420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.589434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.593654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.593698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.593713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.598095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.598134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.598149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.602415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.602455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.602468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.606746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.606786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.606800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.611065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.611104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.611118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.615324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.615363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.615377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.619606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 
[2024-11-19 10:14:04.619651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.619666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.623948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.623986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.624000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.628261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.628300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.628314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.632559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.632600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.632613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.636897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.636956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.636970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.641198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.641238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.641252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.645519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.645559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.645573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.649814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.649854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.649868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.654101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.654141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.654155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.658402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.658444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.658459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.662700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.662740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.662754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.666999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.667059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.667074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.671292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.963 [2024-11-19 10:14:04.671331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.963 [2024-11-19 10:14:04.671345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.963 [2024-11-19 10:14:04.675603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.675642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.675656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.679895] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.679947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.679961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.684120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.684170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.684186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.688428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.688467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.688482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.692713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.692754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.692768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.697009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.697049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.697063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.701253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.701293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.701307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.705547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.705588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.705602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 
dnr:0 00:19:50.964 [2024-11-19 10:14:04.709853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.709897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.709931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.714181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.714222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.714236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.718462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.718503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.718518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.722828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.722868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.722882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.727186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.727227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.727241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.731548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.731588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.731603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.735938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.735982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.735996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.740183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.740222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.740235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.744509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.744550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.744564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.748805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.748846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.748860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.753138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.753178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.753192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.757463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.757505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.757519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.761821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.761862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.761876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.766166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.766208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.766223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.770467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.770508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.770522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.774807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.774849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.774864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.779125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.779164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.779178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.783426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.783469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.783483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.787762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.787803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.787817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.792143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.792195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.792210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.796519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.796558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.796572] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.800815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.800855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.800869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.805133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.805174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.805188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.809445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.809488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.809503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.813753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.813794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.813810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.818050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.818090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.818105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.822275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.822316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.822330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.826522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.826553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:50.964 [2024-11-19 10:14:04.826565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.830868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.830907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.830940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.835149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.835191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.835206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.839402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.839442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.839456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.843709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.843749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.843763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.964 [2024-11-19 10:14:04.847988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:50.964 [2024-11-19 10:14:04.848028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.964 [2024-11-19 10:14:04.848041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.223 [2024-11-19 10:14:04.852283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.223 [2024-11-19 10:14:04.852322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.223 [2024-11-19 10:14:04.852336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.223 [2024-11-19 10:14:04.856599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.223 [2024-11-19 10:14:04.856641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.223 [2024-11-19 10:14:04.856655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.223 [2024-11-19 10:14:04.860828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.223 [2024-11-19 10:14:04.860869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.223 [2024-11-19 10:14:04.860883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.223 [2024-11-19 10:14:04.865139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.223 [2024-11-19 10:14:04.865180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.223 [2024-11-19 10:14:04.865195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.223 [2024-11-19 10:14:04.869434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.223 [2024-11-19 10:14:04.869475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.223 [2024-11-19 10:14:04.869488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.223 [2024-11-19 10:14:04.873795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.223 [2024-11-19 10:14:04.873835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.873849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.878203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.878243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.878256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.882595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.882641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.882655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.886902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.886955] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.886970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.891166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.891206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.891219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.895462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.895502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.895516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.899794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.899836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.899850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.904079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.904118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.904131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.908379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.908420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.908434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.912636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.912677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.912691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.917009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.917049] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.917063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.921328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.921368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.921382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.925581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.925622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.925636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.929860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.929902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.929938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.934184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.934223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.934236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.938451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.938493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.938507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.942805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.942845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.942860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.947188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.947228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.947242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.951463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.951503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.951518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.955768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.955807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.955821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.960062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.960102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.960115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.964332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.964373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.964387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.968558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.968600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.968614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.972845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.972887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.972901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.977185] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.977225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.977238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.981561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.981602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.981616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.985887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.985943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.224 [2024-11-19 10:14:04.985958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.224 [2024-11-19 10:14:04.990183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.224 [2024-11-19 10:14:04.990221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:04.990235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:04.994453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:04.994492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:04.994506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:04.998731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:04.998771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:04.998785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.003029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.003067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.003081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:19:51.225 [2024-11-19 10:14:05.007316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.007355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.007369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.011622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.011662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.011676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.015949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.015988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.016002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.020245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.020283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.020297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.024516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.024556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.024569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.028741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.028779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.028793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.033053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.033092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.033106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.037380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.037418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.037432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.041679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.041719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.041733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.045999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.046040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.046054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.050343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.050381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.050395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.054654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.054700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.054713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.059009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.059049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.059062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.063325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.063366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.063379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.067678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.067717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.067731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.071891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.071941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.071956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.076173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.076212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.076225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.080515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.080554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.080568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.084870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.084910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.084940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.089197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.089236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.089250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.093506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.093545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.093559] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.097856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.097895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.097909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.102146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.102186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.225 [2024-11-19 10:14:05.102200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.225 [2024-11-19 10:14:05.106421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.225 [2024-11-19 10:14:05.106461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.226 [2024-11-19 10:14:05.106475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.484 [2024-11-19 10:14:05.110749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.484 [2024-11-19 10:14:05.110791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.484 [2024-11-19 10:14:05.110805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.484 [2024-11-19 10:14:05.115050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.484 [2024-11-19 10:14:05.115089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.484 [2024-11-19 10:14:05.115103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.484 [2024-11-19 10:14:05.119331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.484 [2024-11-19 10:14:05.119370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.484 [2024-11-19 10:14:05.119384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.484 [2024-11-19 10:14:05.123676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.484 [2024-11-19 10:14:05.123718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:51.484 [2024-11-19 10:14:05.123732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.484 [2024-11-19 10:14:05.128040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.484 [2024-11-19 10:14:05.128081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.484 [2024-11-19 10:14:05.128094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.484 [2024-11-19 10:14:05.132368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.132411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.132424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.136765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.136807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.136822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.141114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.141152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.141166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.145418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.145458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.145487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.149712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.149753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.149767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.154044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.154086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22176 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.154100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.158295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.158336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.158350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.162648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.162689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.162704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.166964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.167004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.167018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.171257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.171297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.171326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.175606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.175646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.175660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.179924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.179961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.179975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.184240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.184281] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.184296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.188467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.188507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.188521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:51.485 [2024-11-19 10:14:05.192822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.192863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.192877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:51.485 7207.50 IOPS, 900.94 MiB/s [2024-11-19T10:14:05.374Z] [2024-11-19 10:14:05.198481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xef0400) 00:19:51.485 [2024-11-19 10:14:05.198523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.485 [2024-11-19 10:14:05.198537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.485 00:19:51.485 Latency(us) 00:19:51.485 [2024-11-19T10:14:05.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.485 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:51.485 nvme0n1 : 2.00 7203.51 900.44 0.00 0.00 2217.47 1980.97 5391.83 00:19:51.485 [2024-11-19T10:14:05.374Z] =================================================================================================================== 00:19:51.485 [2024-11-19T10:14:05.374Z] Total : 7203.51 900.44 0.00 0.00 2217.47 1980.97 5391.83 00:19:51.485 { 00:19:51.485 "results": [ 00:19:51.485 { 00:19:51.485 "job": "nvme0n1", 00:19:51.485 "core_mask": "0x2", 00:19:51.485 "workload": "randread", 00:19:51.485 "status": "finished", 00:19:51.485 "queue_depth": 16, 00:19:51.485 "io_size": 131072, 00:19:51.485 "runtime": 2.003328, 00:19:51.485 "iops": 7203.513353779311, 00:19:51.485 "mibps": 900.4391692224139, 00:19:51.485 "io_failed": 0, 00:19:51.485 "io_timeout": 0, 00:19:51.485 "avg_latency_us": 2217.471254937288, 00:19:51.485 "min_latency_us": 1980.9745454545455, 00:19:51.485 "max_latency_us": 5391.825454545455 00:19:51.485 } 00:19:51.485 ], 00:19:51.485 "core_count": 1 00:19:51.485 } 00:19:51.485 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:51.485 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:51.485 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:51.485 
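The shell trace here (get_transient_errcount in host/digest.sh, continued just below) reads back the per-bdev NVMe error counters that the injected data-digest corruption is expected to increment. A minimal standalone sketch of the same check, assuming the bperf RPC socket and nvme0n1 bdev used in this run, might look like the following; errcount is a hypothetical variable name, while the rpc.py call and jq path are exactly the ones traced in the log:
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the digest test only passes if at least one COMMAND TRANSIENT TRANSPORT ERROR was counted
  (( errcount > 0 )) || echo "no transient transport errors recorded"
These counters are only populated because the controller was attached after bdev_nvme_set_options --nvme-error-stat (and with --ddgst so data digests are validated, plus accel_error_inject_error -o crc32c -t corrupt to force mismatches), as the setup traces further below show.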
10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:51.485 | .driver_specific 00:19:51.485 | .nvme_error 00:19:51.485 | .status_code 00:19:51.485 | .command_transient_transport_error' 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 466 > 0 )) 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80380 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80380 ']' 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80380 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80380 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:51.744 killing process with pid 80380 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80380' 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80380 00:19:51.744 Received shutdown signal, test time was about 2.000000 seconds 00:19:51.744 00:19:51.744 Latency(us) 00:19:51.744 [2024-11-19T10:14:05.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.744 [2024-11-19T10:14:05.633Z] =================================================================================================================== 00:19:51.744 [2024-11-19T10:14:05.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.744 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80380 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80433 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80433 /var/tmp/bperf.sock 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80433 ']' 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:52.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.002 10:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:52.002 [2024-11-19 10:14:05.761184] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:52.002 [2024-11-19 10:14:05.761462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80433 ] 00:19:52.261 [2024-11-19 10:14:05.906885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.261 [2024-11-19 10:14:05.967965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.261 [2024-11-19 10:14:06.021170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:53.196 10:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.196 10:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:53.196 10:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:53.196 10:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:53.196 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:53.196 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.196 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:53.196 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.196 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:53.196 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:53.763 nvme0n1 00:19:53.763 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:53.763 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.763 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:53.763 10:14:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.763 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:53.763 10:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:53.763 Running I/O for 2 seconds... 00:19:53.763 [2024-11-19 10:14:07.554672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fc560 00:19:53.763 [2024-11-19 10:14:07.556283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.763 [2024-11-19 10:14:07.556330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.763 [2024-11-19 10:14:07.571296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fcdd0 00:19:53.763 [2024-11-19 10:14:07.572751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.763 [2024-11-19 10:14:07.572790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.763 [2024-11-19 10:14:07.587832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fd640 00:19:53.763 [2024-11-19 10:14:07.589291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.763 [2024-11-19 10:14:07.589358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.763 [2024-11-19 10:14:07.604311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fdeb0 00:19:53.763 [2024-11-19 10:14:07.605850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.763 [2024-11-19 10:14:07.605897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.763 [2024-11-19 10:14:07.621167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fe720 00:19:53.763 [2024-11-19 10:14:07.622517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.763 [2024-11-19 10:14:07.622586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.763 [2024-11-19 10:14:07.637122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ff3c8 00:19:53.763 [2024-11-19 10:14:07.638487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.763 [2024-11-19 10:14:07.638525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:54.021 [2024-11-19 10:14:07.660977] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ff3c8 00:19:54.021 [2024-11-19 10:14:07.663558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.021 [2024-11-19 10:14:07.663729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.677609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fe720 00:19:54.022 [2024-11-19 10:14:07.680368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.680547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.694450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fdeb0 00:19:54.022 [2024-11-19 10:14:07.697209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.697434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.711406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fd640 00:19:54.022 [2024-11-19 10:14:07.714247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.714449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.728140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fcdd0 00:19:54.022 [2024-11-19 10:14:07.730784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.730978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.744744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fc560 00:19:54.022 [2024-11-19 10:14:07.747385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.747557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.761255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fbcf0 00:19:54.022 [2024-11-19 10:14:07.763803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.763987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:54.022 
[2024-11-19 10:14:07.777642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fb480 00:19:54.022 [2024-11-19 10:14:07.780207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.780386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.793990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fac10 00:19:54.022 [2024-11-19 10:14:07.796534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.796574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.810038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166fa3a0 00:19:54.022 [2024-11-19 10:14:07.812414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.812455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.826562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f9b30 00:19:54.022 [2024-11-19 10:14:07.829009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.829051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.843045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f92c0 00:19:54.022 [2024-11-19 10:14:07.845356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.845396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.859289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f8a50 00:19:54.022 [2024-11-19 10:14:07.861629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.861666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.875697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f81e0 00:19:54.022 [2024-11-19 10:14:07.878045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.878080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005b p:0 
m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.892033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f7970 00:19:54.022 [2024-11-19 10:14:07.894273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.022 [2024-11-19 10:14:07.894309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:54.022 [2024-11-19 10:14:07.908253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f7100 00:19:54.281 [2024-11-19 10:14:07.910697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:07.910869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:07.925065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f6890 00:19:54.281 [2024-11-19 10:14:07.927341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:07.927379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:07.941391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f6020 00:19:54.281 [2024-11-19 10:14:07.943622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:07.943659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:07.957647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f57b0 00:19:54.281 [2024-11-19 10:14:07.959805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:07.959976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:07.973994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f4f40 00:19:54.281 [2024-11-19 10:14:07.976313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:07.976525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:07.990442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f46d0 00:19:54.281 [2024-11-19 10:14:07.992725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:07.992902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 
cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:08.006795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f3e60 00:19:54.281 [2024-11-19 10:14:08.009073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:08.009255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:08.023246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f35f0 00:19:54.281 [2024-11-19 10:14:08.025489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:08.025669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:08.039609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f2d80 00:19:54.281 [2024-11-19 10:14:08.041833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:08.042019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:08.055991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f2510 00:19:54.281 [2024-11-19 10:14:08.058190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:08.058358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:08.072609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f1ca0 00:19:54.281 [2024-11-19 10:14:08.074958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:08.075141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:08.089816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f1430 00:19:54.281 [2024-11-19 10:14:08.092099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.281 [2024-11-19 10:14:08.092295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.281 [2024-11-19 10:14:08.106394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f0bc0 00:19:54.282 [2024-11-19 10:14:08.108536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.282 [2024-11-19 10:14:08.108705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:54.282 [2024-11-19 10:14:08.122801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f0350 00:19:54.282 [2024-11-19 10:14:08.124935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.282 [2024-11-19 10:14:08.125172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:54.282 [2024-11-19 10:14:08.139297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166efae0 00:19:54.282 [2024-11-19 10:14:08.141400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.282 [2024-11-19 10:14:08.141438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:54.282 [2024-11-19 10:14:08.155480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ef270 00:19:54.282 [2024-11-19 10:14:08.157461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.282 [2024-11-19 10:14:08.157502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.171794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166eea00 00:19:54.541 [2024-11-19 10:14:08.173775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.173821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.188081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ee190 00:19:54.541 [2024-11-19 10:14:08.190032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.190076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.204215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ed920 00:19:54.541 [2024-11-19 10:14:08.206116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.206152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.220224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ed0b0 00:19:54.541 [2024-11-19 10:14:08.222069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.222104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.236152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ec840 00:19:54.541 [2024-11-19 10:14:08.237989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.238024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.252093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ebfd0 00:19:54.541 [2024-11-19 10:14:08.253895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.253945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.267999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166eb760 00:19:54.541 [2024-11-19 10:14:08.269801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.269837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.284185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166eaef0 00:19:54.541 [2024-11-19 10:14:08.285999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.286038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.300309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ea680 00:19:54.541 [2024-11-19 10:14:08.302074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.302112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.316337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e9e10 00:19:54.541 [2024-11-19 10:14:08.318123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.318158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.332371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e95a0 00:19:54.541 [2024-11-19 10:14:08.334094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.334252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.348589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e8d30 00:19:54.541 [2024-11-19 10:14:08.350325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.350365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.364781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e84c0 00:19:54.541 [2024-11-19 10:14:08.366505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.366545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.381113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e7c50 00:19:54.541 [2024-11-19 10:14:08.382781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.382819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.397465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e73e0 00:19:54.541 [2024-11-19 10:14:08.399147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.541 [2024-11-19 10:14:08.399304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:54.541 [2024-11-19 10:14:08.413790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e6b70 00:19:54.542 [2024-11-19 10:14:08.415431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.542 [2024-11-19 10:14:08.415468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:54.800 [2024-11-19 10:14:08.429792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e6300 00:19:54.800 [2024-11-19 10:14:08.431549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.800 [2024-11-19 10:14:08.431580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:54.800 [2024-11-19 10:14:08.446000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e5a90 00:19:54.800 [2024-11-19 10:14:08.447579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.800 [2024-11-19 
10:14:08.447615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:54.800 [2024-11-19 10:14:08.462035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e5220 00:19:54.800 [2024-11-19 10:14:08.463589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.800 [2024-11-19 10:14:08.463625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:54.800 [2024-11-19 10:14:08.478007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e49b0 00:19:54.800 [2024-11-19 10:14:08.479541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.800 [2024-11-19 10:14:08.479578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:54.800 [2024-11-19 10:14:08.494053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e4140 00:19:54.800 [2024-11-19 10:14:08.495586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.800 [2024-11-19 10:14:08.495625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.510116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e38d0 00:19:54.801 [2024-11-19 10:14:08.511628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.511666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.526216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e3060 00:19:54.801 [2024-11-19 10:14:08.527706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.527743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:54.801 15435.00 IOPS, 60.29 MiB/s [2024-11-19T10:14:08.690Z] [2024-11-19 10:14:08.542276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e27f0 00:19:54.801 [2024-11-19 10:14:08.543747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.543789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.558491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e1f80 00:19:54.801 [2024-11-19 10:14:08.559992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:118 nsid:1 lba:8493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.560153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.575046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e1710 00:19:54.801 [2024-11-19 10:14:08.576540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.576584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.591594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e0ea0 00:19:54.801 [2024-11-19 10:14:08.593084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.593124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.607893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e0630 00:19:54.801 [2024-11-19 10:14:08.609329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.609368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.623884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166dfdc0 00:19:54.801 [2024-11-19 10:14:08.625279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.625317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.640032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166df550 00:19:54.801 [2024-11-19 10:14:08.641651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.641687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.656318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166dece0 00:19:54.801 [2024-11-19 10:14:08.657646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.657681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:54.801 [2024-11-19 10:14:08.672300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166de470 00:19:54.801 [2024-11-19 10:14:08.673606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.801 [2024-11-19 10:14:08.673642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:55.058 [2024-11-19 10:14:08.694940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ddc00 00:19:55.059 [2024-11-19 10:14:08.697460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.697497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.710863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166de470 00:19:55.059 [2024-11-19 10:14:08.713510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.713543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.727123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166dece0 00:19:55.059 [2024-11-19 10:14:08.729677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.729713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.743185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166df550 00:19:55.059 [2024-11-19 10:14:08.745647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.745683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.759112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166dfdc0 00:19:55.059 [2024-11-19 10:14:08.761553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.761589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.775064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e0630 00:19:55.059 [2024-11-19 10:14:08.777492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.777529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.791186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e0ea0 00:19:55.059 [2024-11-19 
10:14:08.793619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.793660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.807220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e1710 00:19:55.059 [2024-11-19 10:14:08.809620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.809661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.823730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e1f80 00:19:55.059 [2024-11-19 10:14:08.826459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.826494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.840895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e27f0 00:19:55.059 [2024-11-19 10:14:08.843341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.843379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.857190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e3060 00:19:55.059 [2024-11-19 10:14:08.859506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.859543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.873249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e38d0 00:19:55.059 [2024-11-19 10:14:08.875548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.875584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.889195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e4140 00:19:55.059 [2024-11-19 10:14:08.891467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.891503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.905166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e49b0 
00:19:55.059 [2024-11-19 10:14:08.907427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.907463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.921247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e5220 00:19:55.059 [2024-11-19 10:14:08.923497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.923534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:55.059 [2024-11-19 10:14:08.937204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e5a90 00:19:55.059 [2024-11-19 10:14:08.939420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.059 [2024-11-19 10:14:08.939455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:08.953160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e6300 00:19:55.317 [2024-11-19 10:14:08.955357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:08.955392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:08.969154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e6b70 00:19:55.317 [2024-11-19 10:14:08.971372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:08.971411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:08.985597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e73e0 00:19:55.317 [2024-11-19 10:14:08.987875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:08.987929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.002033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e7c50 00:19:55.317 [2024-11-19 10:14:09.004266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.004308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.018012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with 
pdu=0x2000166e84c0 00:19:55.317 [2024-11-19 10:14:09.020138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.020179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.033924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e8d30 00:19:55.317 [2024-11-19 10:14:09.036061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.036099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.050000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e95a0 00:19:55.317 [2024-11-19 10:14:09.052069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.052104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.066021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166e9e10 00:19:55.317 [2024-11-19 10:14:09.068087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.068123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.081876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ea680 00:19:55.317 [2024-11-19 10:14:09.083944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.083978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.097958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166eaef0 00:19:55.317 [2024-11-19 10:14:09.100166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.100202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.114392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166eb760 00:19:55.317 [2024-11-19 10:14:09.116464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.116515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.130683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13be5b0) with pdu=0x2000166ebfd0 00:19:55.317 [2024-11-19 10:14:09.132726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.132766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.146768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ec840 00:19:55.317 [2024-11-19 10:14:09.148757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.148795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.162789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ed0b0 00:19:55.317 [2024-11-19 10:14:09.164765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.164800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.178780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ed920 00:19:55.317 [2024-11-19 10:14:09.180738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.317 [2024-11-19 10:14:09.180775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:55.317 [2024-11-19 10:14:09.194737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ee190 00:19:55.317 [2024-11-19 10:14:09.196683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.318 [2024-11-19 10:14:09.196722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 10:14:09.210740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166eea00 00:19:55.574 [2024-11-19 10:14:09.212663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.574 [2024-11-19 10:14:09.212701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.226773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166ef270 00:19:55.575 [2024-11-19 10:14:09.228674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.228712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.242736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x13be5b0) with pdu=0x2000166efae0 00:19:55.575 [2024-11-19 10:14:09.244622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.244659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.258699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f0350 00:19:55.575 [2024-11-19 10:14:09.260554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.260591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.274638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f0bc0 00:19:55.575 [2024-11-19 10:14:09.276464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.276502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.290575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f1430 00:19:55.575 [2024-11-19 10:14:09.292391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.292428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.306516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f1ca0 00:19:55.575 [2024-11-19 10:14:09.308301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.308338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.322443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f2510 00:19:55.575 [2024-11-19 10:14:09.324218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.324255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.338389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f2d80 00:19:55.575 [2024-11-19 10:14:09.340138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.340181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.354343] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f35f0 00:19:55.575 [2024-11-19 10:14:09.356069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.356102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.370283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f3e60 00:19:55.575 [2024-11-19 10:14:09.371979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.372013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.386166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f46d0 00:19:55.575 [2024-11-19 10:14:09.387836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.387873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.402078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f4f40 00:19:55.575 [2024-11-19 10:14:09.403732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.403768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.418007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f57b0 00:19:55.575 [2024-11-19 10:14:09.419639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.419674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.433908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f6020 00:19:55.575 [2024-11-19 10:14:09.435535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.435570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 10:14:09.449831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f6890 00:19:55.575 [2024-11-19 10:14:09.451446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.575 [2024-11-19 10:14:09.451483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:55.833 
[2024-11-19 10:14:09.465756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f7100 00:19:55.833 [2024-11-19 10:14:09.467351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.833 [2024-11-19 10:14:09.467387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:55.833 [2024-11-19 10:14:09.481671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f7970 00:19:55.833 [2024-11-19 10:14:09.483247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.833 [2024-11-19 10:14:09.483282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:55.833 [2024-11-19 10:14:09.497589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f81e0 00:19:55.833 [2024-11-19 10:14:09.499141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.833 [2024-11-19 10:14:09.499175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:55.833 [2024-11-19 10:14:09.513631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f8a50 00:19:55.833 [2024-11-19 10:14:09.515170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.833 [2024-11-19 10:14:09.515207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:55.833 [2024-11-19 10:14:09.529628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f92c0 00:19:55.833 [2024-11-19 10:14:09.531159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.833 [2024-11-19 10:14:09.531196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:55.833 15624.00 IOPS, 61.03 MiB/s [2024-11-19T10:14:09.722Z] [2024-11-19 10:14:09.547399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be5b0) with pdu=0x2000166f9b30 00:19:55.833 [2024-11-19 10:14:09.548956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.833 [2024-11-19 10:14:09.548998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:55.833 00:19:55.833 Latency(us) 00:19:55.833 [2024-11-19T10:14:09.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.833 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:55.833 nvme0n1 : 2.01 15607.69 60.97 0.00 0.00 8194.04 2502.28 30504.03 00:19:55.833 [2024-11-19T10:14:09.722Z] 
=================================================================================================================== 00:19:55.833 [2024-11-19T10:14:09.722Z] Total : 15607.69 60.97 0.00 0.00 8194.04 2502.28 30504.03 00:19:55.833 { 00:19:55.833 "results": [ 00:19:55.833 { 00:19:55.833 "job": "nvme0n1", 00:19:55.833 "core_mask": "0x2", 00:19:55.833 "workload": "randwrite", 00:19:55.833 "status": "finished", 00:19:55.833 "queue_depth": 128, 00:19:55.833 "io_size": 4096, 00:19:55.833 "runtime": 2.010291, 00:19:55.833 "iops": 15607.690627874274, 00:19:55.833 "mibps": 60.96754151513388, 00:19:55.833 "io_failed": 0, 00:19:55.833 "io_timeout": 0, 00:19:55.833 "avg_latency_us": 8194.036749339391, 00:19:55.833 "min_latency_us": 2502.2836363636366, 00:19:55.833 "max_latency_us": 30504.02909090909 00:19:55.833 } 00:19:55.833 ], 00:19:55.833 "core_count": 1 00:19:55.833 } 00:19:55.833 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:55.833 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:55.833 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:55.833 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:55.833 | .driver_specific 00:19:55.833 | .nvme_error 00:19:55.833 | .status_code 00:19:55.833 | .command_transient_transport_error' 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80433 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80433 ']' 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80433 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80433 00:19:56.090 killing process with pid 80433 00:19:56.090 Received shutdown signal, test time was about 2.000000 seconds 00:19:56.090 00:19:56.090 Latency(us) 00:19:56.090 [2024-11-19T10:14:09.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.090 [2024-11-19T10:14:09.979Z] =================================================================================================================== 00:19:56.090 [2024-11-19T10:14:09.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80433' 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80433 00:19:56.090 10:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 80433 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80494 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80494 /var/tmp/bperf.sock 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80494 ']' 00:19:56.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.347 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:56.347 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:56.347 Zero copy mechanism will not be used. 00:19:56.347 [2024-11-19 10:14:10.101276] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:19:56.347 [2024-11-19 10:14:10.101367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80494 ] 00:19:56.606 [2024-11-19 10:14:10.243877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.606 [2024-11-19 10:14:10.301619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.606 [2024-11-19 10:14:10.354398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:56.606 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.606 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:56.606 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:56.606 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:56.863 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:56.863 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.863 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:56.863 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.863 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.864 10:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:57.431 nvme0n1 00:19:57.431 10:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:57.431 10:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.431 10:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:57.431 10:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.431 10:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:57.431 10:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:57.431 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:57.431 Zero copy mechanism will not be used. 00:19:57.431 Running I/O for 2 seconds... 
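The trace above sets up the second error pass of the digest test: bdevperf is restarted for a 128 KiB random-write workload at queue depth 16, per-bdev NVMe error counters are enabled with retries disabled, crc32c corruption is armed on every 32nd operation via accel_error_inject_error, and the controller is attached with data digests (--ddgst) before perform_tests drives I/O for 2 seconds. As a rough sketch only (not part of the captured output), the bdevperf-side RPCs from this trace could be replayed by hand as below; the rpc.py path, bperf socket, target address/NQN, and the jq filter are copied verbatim from this log, while the accel injection step (issued through the test's rpc_cmd helper) is deliberately omitted because the RPC socket it targets is not visible in this excerpt.

#!/usr/bin/env bash
# Sketch of the bdevperf-side RPC sequence visible in the trace above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# Enable per-bdev NVMe error counters and disable retries so injected digest
# failures remain visible as failed completions.
"$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled; CRC32C mismatches then
# surface as the COMMAND TRANSIENT TRANSPORT ERROR completions filling this log.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the timed workload, then read back how many commands completed with a
# transient transport error (same jq filter the digest.sh helper uses earlier
# in this log, where it checked that the count was greater than zero).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
"$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
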
00:19:57.431 [2024-11-19 10:14:11.274283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.274380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.274412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.431 [2024-11-19 10:14:11.279412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.279519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.279547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.431 [2024-11-19 10:14:11.284585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.284677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.284704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.431 [2024-11-19 10:14:11.289594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.289870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.289897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.431 [2024-11-19 10:14:11.294855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.294966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.294993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.431 [2024-11-19 10:14:11.299865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.299970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.299997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.431 [2024-11-19 10:14:11.304866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.304964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.304991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.431 [2024-11-19 10:14:11.309776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.310092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.310119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.431 [2024-11-19 10:14:11.314978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.431 [2024-11-19 10:14:11.315056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.431 [2024-11-19 10:14:11.315080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.319840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.319948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.319972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.324862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.324953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.324978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.329735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.329978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.330001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.334826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.334934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.334958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.339770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.339855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.339878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.344782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.344854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.344882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.349740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.349988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.350012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.354888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.354982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.355006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.359786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.359881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.359904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.364779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.364871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.364895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.369732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.369958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.369992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.374926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.375001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.375024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.379811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.379906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.691 [2024-11-19 10:14:11.379943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.691 [2024-11-19 10:14:11.384763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.691 [2024-11-19 10:14:11.384836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.384859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.389693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.389935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.389958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.394834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.394911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.394947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.399810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.399888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.399926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.404754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.404840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.404863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.409717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.409963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.409987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.414797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.414893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.414929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.419696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.419792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.419815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.424608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.424680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.424703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.429596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.429810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.429841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.434676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.434764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.434787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.439625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.439711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.439734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.444568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.444664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 
10:14:11.444687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.449483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.449693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.449716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.454582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.454677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.454700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.459444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.459548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.459571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.464343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.464429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.464451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.469276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.469350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.469373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.474200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.474304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.474327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.479099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.479196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:57.692 [2024-11-19 10:14:11.479219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.484038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.484131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.484153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.488947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.489032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.489055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.493890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.493978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.494001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.498803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.498887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.498910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.503807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.503887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.503910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.508789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.509012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.509035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.513932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.514005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.692 [2024-11-19 10:14:11.514039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.692 [2024-11-19 10:14:11.518810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.692 [2024-11-19 10:14:11.518884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.518907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.523745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.523818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.523841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.528741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.528969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.528993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.533867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.533960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.533984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.538755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.538829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.538852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.543688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.543789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.543812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.548633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.548862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.548885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.553784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.553878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.553900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.558753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.558827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.558851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.563675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.563746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.563769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.568673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.568891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.568928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.693 [2024-11-19 10:14:11.573870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.693 [2024-11-19 10:14:11.573963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.693 [2024-11-19 10:14:11.573987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.953 [2024-11-19 10:14:11.578801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.953 [2024-11-19 10:14:11.578898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-19 10:14:11.578936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.953 [2024-11-19 10:14:11.583821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.953 [2024-11-19 10:14:11.583935] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-19 10:14:11.583959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.953 [2024-11-19 10:14:11.588807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.953 [2024-11-19 10:14:11.589049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.589072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.593975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.594070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.594093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.598884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.598995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.599018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.603820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.603894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.603931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.608768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.609012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.609035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.613863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.613956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.613979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.618780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.618875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.618897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.623712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.623797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.623819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.628680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.628907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.628945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.633776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.633874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.633897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.638723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.638798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.638822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.643650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.643732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.643757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.648660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.648949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.648976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.653871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 
10:14:11.653972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.653998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.658892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.658995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.659020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.663864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.663969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.663994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.668838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.669101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.669126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.673985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.674087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.674111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.678926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.679007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.679032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.683930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.684009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.684034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.688870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 
00:19:57.954 [2024-11-19 10:14:11.688981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.689007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.693811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.693888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.693924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.698710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.698783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.698805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.703650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.703862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.703885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.708781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.708865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.708888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.713701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.713774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.713796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.718609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.718683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.954 [2024-11-19 10:14:11.718705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.954 [2024-11-19 10:14:11.723526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.954 [2024-11-19 10:14:11.723760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.723783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.728683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.728759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.728782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.733643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.733745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.733768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.738591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.738686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.738709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.743606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.743812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.743835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.748802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.748897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.748934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.753751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.753827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.753850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.758726] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.758802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.758824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.763662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.763869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.763892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.768815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.768890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.768912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.773733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.773806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.773840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.778661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.778756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.778778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.783642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.783879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.783901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.788746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.788843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.788865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.793742] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.793817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.793840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.798689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.798773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.798797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.803695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.803936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.803959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.808790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.808864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.808886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.813775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.813854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.813879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.818684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.818756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.818779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.823611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.823841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.823864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:57.955 
[2024-11-19 10:14:11.828733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.828819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.828841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.833660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.833734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.833756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:57.955 [2024-11-19 10:14:11.838599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:57.955 [2024-11-19 10:14:11.838684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.955 [2024-11-19 10:14:11.838706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.843643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.843865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.843888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.848791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.848878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.848903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.853804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.853889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.853926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.858792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.858891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.858927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b 
p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.863841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.864091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.864115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.869224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.869457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.869635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.874406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.874669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.874960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.879665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.879910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.880110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.884741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.884993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.885154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.889830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.890070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.890246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.894997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.895219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.895398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.900045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.900270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.900460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.905145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.905364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.905543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.910231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.910473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.910625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.915355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.915449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.217 [2024-11-19 10:14:11.915473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.217 [2024-11-19 10:14:11.920279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.217 [2024-11-19 10:14:11.920372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.920395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.925195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.925303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.925326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.930226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.930336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.930359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.935201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.935274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.935297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.940007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.940087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.940110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.944905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.945019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.945041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.949820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.950047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.950071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.954898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.954990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.955012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.959842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.959954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.959977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.964824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.964898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.964937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.969736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.969955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.969977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.974823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.974901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.974937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.979728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.979802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.979825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.984688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.984782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.984804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.989619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.989827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.989850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.994754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.994850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:11.994873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:11.999665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:11.999739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 
10:14:11.999761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.004584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.004664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.004687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.009484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.009722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.009745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.014644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.014719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.014741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.019614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.019690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.019713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.024644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.024738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.024761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.029546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.029740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.029762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.034784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.034879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:58.218 [2024-11-19 10:14:12.034901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.039721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.039794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.039816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.044700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.044777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.044800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.049611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.049835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.049858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.218 [2024-11-19 10:14:12.054752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.218 [2024-11-19 10:14:12.054826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.218 [2024-11-19 10:14:12.054849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.059789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.059864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.059886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.064796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.064868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.064891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.069782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.069999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.070021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.074877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.074966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.074989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.079847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.079927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.079950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.084794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.084887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.084910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.089701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.089934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.089957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.094806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.094882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.094904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.219 [2024-11-19 10:14:12.099716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.219 [2024-11-19 10:14:12.099812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.219 [2024-11-19 10:14:12.099835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.104705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.104798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.104822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.109644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.109853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.109877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.114785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.114869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.114891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.119701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.119763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.119786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.124584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.124656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.124679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.129463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.129709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.129732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.134705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.134954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.135181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.139814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.140041] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.140356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.144929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.145146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.145282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.150023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.150100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.150122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.155029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.155126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.155150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.159991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.160086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.160110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.164904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.164999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.479 [2024-11-19 10:14:12.165022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.479 [2024-11-19 10:14:12.169868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.479 [2024-11-19 10:14:12.169960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.169983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.174797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.174889] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.174926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.179697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.179789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.179812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.184661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.184864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.184888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.189745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.189819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.189841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.194633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.194725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.194748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.199554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.199621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.199644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.204468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.204658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.204680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.209563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 
00:19:58.480 [2024-11-19 10:14:12.209656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.209680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.214478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.214569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.214592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.219578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.219663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.219685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.224728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.224934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.224957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.229922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.230050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.230072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.234933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.235035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.235058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.239904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.240027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.240051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.244901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.245129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.245152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.250266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.250520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.250816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.255472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.255706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.255889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.260617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.260864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.261084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.265734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.265966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.266184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.270842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.271077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.271236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.480 6181.00 IOPS, 772.62 MiB/s [2024-11-19T10:14:12.369Z] [2024-11-19 10:14:12.277256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.277472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.277621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:19:58.480 [2024-11-19 10:14:12.282280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.282500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.282755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.287517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.287731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.288063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.292671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.480 [2024-11-19 10:14:12.292863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.480 [2024-11-19 10:14:12.292887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.480 [2024-11-19 10:14:12.297986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.298095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.298118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.302963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.303035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.303058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.307999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.308090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.308113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.312993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.313065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.313086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.317856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.317968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.317991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.322843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.322965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.322988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.327782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.327987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.328009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.332865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.332951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.332985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.337820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.337931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.337954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.342707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.342790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.342812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.347702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.347903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.347939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.352826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.352934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.352957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.357719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.357815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.357838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.481 [2024-11-19 10:14:12.362701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.481 [2024-11-19 10:14:12.362792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.481 [2024-11-19 10:14:12.362816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.367645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.367877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.367900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.372857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.372951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.372987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.377869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.377997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.378020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.382751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.382859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.382881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.387699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.387910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.387946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.392974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.393063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.393086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.397879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.398016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.398040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.402803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.402876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.402898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.407724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.407953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.407976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.412850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.412942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.412976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.417789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.417896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 
[2024-11-19 10:14:12.417918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.422735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.422827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.422850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.427590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.427794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.427816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.432690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.432770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.432793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.437645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.437721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.437744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.442600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.442676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.442698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.447575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.447821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.447846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.452748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.452832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.452856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.457723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.457824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.457848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.462656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.462741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.462765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.467698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.467952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.467977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.472836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.472945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.472968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.477769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.477876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.477899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.482719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.482818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.742 [2024-11-19 10:14:12.482840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.742 [2024-11-19 10:14:12.487707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.742 [2024-11-19 10:14:12.487940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.743 [2024-11-19 10:14:12.487963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.743 [2024-11-19 10:14:12.492753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.743 [2024-11-19 10:14:12.492827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.743 [2024-11-19 10:14:12.492849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.743 [2024-11-19 10:14:12.497629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.743 [2024-11-19 10:14:12.497713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.743 [2024-11-19 10:14:12.497736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.743 [2024-11-19 10:14:12.502592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.743 [2024-11-19 10:14:12.502675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.743 [2024-11-19 10:14:12.502697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:58.743 [2024-11-19 10:14:12.507497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.743 [2024-11-19 10:14:12.507712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.743 [2024-11-19 10:14:12.507734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:58.743 [2024-11-19 10:14:12.512567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.743 [2024-11-19 10:14:12.512654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.743 [2024-11-19 10:14:12.512676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:58.743 [2024-11-19 10:14:12.517533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.743 [2024-11-19 10:14:12.517614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.743 [2024-11-19 10:14:12.517637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:58.743 [2024-11-19 10:14:12.522616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:58.743 [2024-11-19 10:14:12.522698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.743 [2024-11-19 10:14:12.522720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:19:58.743 [2024-11-19 10:14:12.527562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90
00:19:58.743 [2024-11-19 10:14:12.527779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.743 [2024-11-19 10:14:12.527801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:19:58.743 [2024-11-19 10:14:12.532692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90
00:19:58.743 [2024-11-19 10:14:12.532776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.743 [2024-11-19 10:14:12.532799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0
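For context on what the repeated messages above mean: in NVMe/TCP a PDU payload can carry a CRC32C data digest (DDGST); the receiving side recomputes the digest over the data it got and, when it does not match, the request is completed back to the host with a transient transport error, which is exactly the *ERROR*/*NOTICE* pair this test keeps producing. The standalone sketch below only illustrates that check under assumed values; it is not SPDK's implementation (tcp.c uses SPDK's own digest helpers), and the payload contents, digest variable, and single bit-flip are made up for illustration of the injected corruption.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * init 0xFFFFFFFF, final XOR 0xFFFFFFFF -- the digest algorithm
 * NVMe/TCP specifies for HDGST/DDGST. Illustrative only. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Receiver-side check: recompute the digest over the received payload
 * and compare it with the DDGST carried in the PDU. A mismatch is what
 * the log reports as "Data digest error". */
static int data_digest_ok(const void *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst;
}

int main(void)
{
    uint8_t payload[4096];
    memset(payload, 0xA5, sizeof(payload));

    /* Digest the sender would append to the PDU (hypothetical value). */
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    /* Flip one bit "in flight", similar to the corruption this test injects. */
    payload[100] ^= 0x01;

    printf("digest %s\n", data_digest_ok(payload, sizeof(payload), ddgst)
           ? "ok" : "mismatch -> command fails with transient transport error");
    return 0;
}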
[... the same three-message sequence (Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90, a 32-block WRITE on qid:1 with varying cid and lba, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 5 ms from 10:14:12.53 through 10:14:13.21 (elapsed 00:19:58.743 through 00:19:59.528) ...]
00:19:59.528 [2024-11-19 10:14:13.214384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90
00:19:59.528 [2024-11-19 10:14:13.214488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9504 len:32
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.214509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.219615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.219707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.219729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.224595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.224681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.224704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.229570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.229661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.229684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.234532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.234752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.234785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.239755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.239847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.239870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.244833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.244910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.244946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.249846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.249938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.249974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.254838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.255065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.255087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.260041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.260134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.260157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.265115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.265217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.265240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:59.528 [2024-11-19 10:14:13.270212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13be8f0) with pdu=0x2000166fef90 00:19:59.528 [2024-11-19 10:14:13.270308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.528 [2024-11-19 10:14:13.270331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:59.528 6176.50 IOPS, 772.06 MiB/s 00:19:59.528 Latency(us) 00:19:59.528 [2024-11-19T10:14:13.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.528 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:59.528 nvme0n1 : 2.00 6173.00 771.62 0.00 0.00 2586.00 1571.37 6791.91 00:19:59.528 [2024-11-19T10:14:13.418Z] =================================================================================================================== 00:19:59.529 [2024-11-19T10:14:13.418Z] Total : 6173.00 771.62 0.00 0.00 2586.00 1571.37 6791.91 00:19:59.529 { 00:19:59.529 "results": [ 00:19:59.529 { 00:19:59.529 "job": "nvme0n1", 00:19:59.529 "core_mask": "0x2", 00:19:59.529 "workload": "randwrite", 00:19:59.529 "status": "finished", 00:19:59.529 "queue_depth": 16, 00:19:59.529 "io_size": 131072, 00:19:59.529 "runtime": 2.003727, 00:19:59.529 "iops": 6172.996620797145, 00:19:59.529 "mibps": 771.6245775996431, 00:19:59.529 "io_failed": 0, 00:19:59.529 "io_timeout": 0, 00:19:59.529 "avg_latency_us": 2586.0017833439906, 00:19:59.529 "min_latency_us": 1571.3745454545453, 00:19:59.529 "max_latency_us": 6791.912727272727 00:19:59.529 } 00:19:59.529 ], 00:19:59.529 
"core_count": 1 00:19:59.529 } 00:19:59.529 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:59.529 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:59.529 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:59.529 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:59.529 | .driver_specific 00:19:59.529 | .nvme_error 00:19:59.529 | .status_code 00:19:59.529 | .command_transient_transport_error' 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 399 > 0 )) 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80494 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80494 ']' 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80494 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80494 00:19:59.787 killing process with pid 80494 00:19:59.787 Received shutdown signal, test time was about 2.000000 seconds 00:19:59.787 00:19:59.787 Latency(us) 00:19:59.787 [2024-11-19T10:14:13.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.787 [2024-11-19T10:14:13.676Z] =================================================================================================================== 00:19:59.787 [2024-11-19T10:14:13.676Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80494' 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80494 00:19:59.787 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80494 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80295 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80295 ']' 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80295 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80295 00:20:00.046 killing process with pid 80295 
00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80295' 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80295 00:20:00.046 10:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80295 00:20:00.304 00:20:00.304 real 0m17.156s 00:20:00.304 user 0m33.609s 00:20:00.304 sys 0m4.435s 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.304 ************************************ 00:20:00.304 END TEST nvmf_digest_error 00:20:00.304 ************************************ 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:00.304 rmmod nvme_tcp 00:20:00.304 rmmod nvme_fabrics 00:20:00.304 rmmod nvme_keyring 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80295 ']' 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80295 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80295 ']' 00:20:00.304 Process with pid 80295 is not found 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80295 00:20:00.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80295) - No such process 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80295 is not found' 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:00.304 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:00.563 00:20:00.563 real 0m33.977s 00:20:00.563 user 1m4.947s 00:20:00.563 sys 0m9.406s 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.563 ************************************ 00:20:00.563 END TEST nvmf_digest 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:00.563 ************************************ 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.563 ************************************ 00:20:00.563 START TEST nvmf_host_multipath 00:20:00.563 
************************************ 00:20:00.563 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:00.823 * Looking for test storage... 00:20:00.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.823 --rc genhtml_branch_coverage=1 00:20:00.823 --rc genhtml_function_coverage=1 00:20:00.823 --rc genhtml_legend=1 00:20:00.823 --rc geninfo_all_blocks=1 00:20:00.823 --rc geninfo_unexecuted_blocks=1 00:20:00.823 00:20:00.823 ' 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.823 --rc genhtml_branch_coverage=1 00:20:00.823 --rc genhtml_function_coverage=1 00:20:00.823 --rc genhtml_legend=1 00:20:00.823 --rc geninfo_all_blocks=1 00:20:00.823 --rc geninfo_unexecuted_blocks=1 00:20:00.823 00:20:00.823 ' 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.823 --rc genhtml_branch_coverage=1 00:20:00.823 --rc genhtml_function_coverage=1 00:20:00.823 --rc genhtml_legend=1 00:20:00.823 --rc geninfo_all_blocks=1 00:20:00.823 --rc geninfo_unexecuted_blocks=1 00:20:00.823 00:20:00.823 ' 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.823 --rc genhtml_branch_coverage=1 00:20:00.823 --rc genhtml_function_coverage=1 00:20:00.823 --rc genhtml_legend=1 00:20:00.823 --rc geninfo_all_blocks=1 00:20:00.823 --rc geninfo_unexecuted_blocks=1 00:20:00.823 00:20:00.823 ' 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.823 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.824 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:00.824 Cannot find device "nvmf_init_br" 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:00.824 Cannot find device "nvmf_init_br2" 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:00.824 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:01.083 Cannot find device "nvmf_tgt_br" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.083 Cannot find device "nvmf_tgt_br2" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:01.083 Cannot find device "nvmf_init_br" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:01.083 Cannot find device "nvmf_init_br2" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:01.083 Cannot find device "nvmf_tgt_br" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:01.083 Cannot find device "nvmf_tgt_br2" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:01.083 Cannot find device "nvmf_br" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:01.083 Cannot find device "nvmf_init_if" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:01.083 Cannot find device "nvmf_init_if2" 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:01.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:01.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:01.083 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:01.342 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:01.342 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:01.342 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:01.342 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:01.342 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
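The sequence above is nvmf_veth_init building the test topology: the initiator-facing interfaces stay in the root namespace, the target-facing ones are moved into nvmf_tgt_ns_spdk, and the peer ends are joined by the nvmf_br bridge (the remaining bridge ports, the iptables ACCEPT rules and the connectivity pings follow below). A condensed sketch of the same wiring, reusing the interface names and 10.0.0.0/24 addresses visible in this log rather than reproducing the common.sh helpers verbatim:

```bash
# Condensed sketch of the veth/bridge wiring nvmf_veth_init performs in this log.
# Interface names and addresses are the ones visible above; not a drop-in script.
ip netns add nvmf_tgt_ns_spdk

# Two initiator<->bridge pairs and two target<->bridge pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace the nvmf_tgt app will run in.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses 10.0.0.1-2, target listener addresses 10.0.0.3-4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and join the bridge halves into nvmf_br.
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$l" master nvmf_br
done
```

Once the connectivity pings below succeed, 10.0.0.3 is the address the target listeners (ports 4420 and 4421) are created on later in this log.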
00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:01.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:01.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:20:01.342 00:20:01.342 --- 10.0.0.3 ping statistics --- 00:20:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.342 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:01.342 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:01.342 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:20:01.342 00:20:01.342 --- 10.0.0.4 ping statistics --- 00:20:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.342 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:01.342 00:20:01.342 --- 10.0.0.1 ping statistics --- 00:20:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.342 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:01.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:01.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:20:01.342 00:20:01.342 --- 10.0.0.2 ping statistics --- 00:20:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.342 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.342 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80803 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80803 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80803 ']' 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.343 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:01.343 [2024-11-19 10:14:15.174872] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:20:01.343 [2024-11-19 10:14:15.174997] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.601 [2024-11-19 10:14:15.329118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:01.601 [2024-11-19 10:14:15.396936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.601 [2024-11-19 10:14:15.397013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.601 [2024-11-19 10:14:15.397028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.601 [2024-11-19 10:14:15.397038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.601 [2024-11-19 10:14:15.397047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.601 [2024-11-19 10:14:15.398301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.601 [2024-11-19 10:14:15.398315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.601 [2024-11-19 10:14:15.455302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:02.538 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.538 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:02.538 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:02.538 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.538 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:02.538 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.538 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80803 00:20:02.538 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:02.818 [2024-11-19 10:14:16.507240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.818 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:03.077 Malloc0 00:20:03.077 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:03.335 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.594 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:03.852 [2024-11-19 10:14:17.661869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:03.852 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:04.112 [2024-11-19 10:14:17.910051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80859 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80859 /var/tmp/bdevperf.sock 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80859 ']' 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.112 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:05.485 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.485 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:05.485 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:05.485 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:05.743 Nvme0n1 00:20:05.743 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:06.311 Nvme0n1 00:20:06.311 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:06.311 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:07.246 10:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:07.246 10:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:07.505 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:07.763 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:07.763 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80910 00:20:07.763 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80803 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:07.763 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:14.368 Attaching 4 probes... 00:20:14.368 @path[10.0.0.3, 4421]: 13079 00:20:14.368 @path[10.0.0.3, 4421]: 13456 00:20:14.368 @path[10.0.0.3, 4421]: 13278 00:20:14.368 @path[10.0.0.3, 4421]: 13347 00:20:14.368 @path[10.0.0.3, 4421]: 13329 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80910 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:14.368 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:14.368 10:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:14.626 10:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:14.626 10:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81023 00:20:14.626 10:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:14.626 10:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80803 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:21.187 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:21.187 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:21.187 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:21.187 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:21.187 Attaching 4 probes... 00:20:21.187 @path[10.0.0.3, 4420]: 17578 00:20:21.187 @path[10.0.0.3, 4420]: 17848 00:20:21.187 @path[10.0.0.3, 4420]: 17799 00:20:21.187 @path[10.0.0.3, 4420]: 17872 00:20:21.187 @path[10.0.0.3, 4420]: 17878 00:20:21.187 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:21.187 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:21.188 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:21.188 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:21.188 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:21.188 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:21.188 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81023 00:20:21.188 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:21.188 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:21.188 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:21.188 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:21.446 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:21.446 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80803 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:21.446 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81141 00:20:21.446 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:28.027 Attaching 4 probes... 00:20:28.027 @path[10.0.0.3, 4421]: 13491 00:20:28.027 @path[10.0.0.3, 4421]: 17259 00:20:28.027 @path[10.0.0.3, 4421]: 17315 00:20:28.027 @path[10.0.0.3, 4421]: 17542 00:20:28.027 @path[10.0.0.3, 4421]: 17599 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81141 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:28.027 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:28.285 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:28.285 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81254 00:20:28.285 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80803 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:28.285 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:34.920 Attaching 4 probes... 
00:20:34.920 00:20:34.920 00:20:34.920 00:20:34.920 00:20:34.920 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81254 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:34.920 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:34.921 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:35.180 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:35.180 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80803 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:35.180 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81366 00:20:35.180 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:41.746 10:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:41.746 10:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:41.746 Attaching 4 probes... 
00:20:41.746 @path[10.0.0.3, 4421]: 16679 00:20:41.746 @path[10.0.0.3, 4421]: 16876 00:20:41.746 @path[10.0.0.3, 4421]: 17032 00:20:41.746 @path[10.0.0.3, 4421]: 17377 00:20:41.746 @path[10.0.0.3, 4421]: 17244 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81366 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:41.746 10:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:42.684 10:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:42.684 10:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81490 00:20:42.684 10:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80803 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:42.684 10:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:49.353 Attaching 4 probes... 
00:20:49.353 @path[10.0.0.3, 4420]: 17473 00:20:49.353 @path[10.0.0.3, 4420]: 17553 00:20:49.353 @path[10.0.0.3, 4420]: 17764 00:20:49.353 @path[10.0.0.3, 4420]: 17861 00:20:49.353 @path[10.0.0.3, 4420]: 17482 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81490 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:49.353 10:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:49.353 [2024-11-19 10:15:03.071864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:49.353 10:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:49.612 10:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:56.176 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:56.176 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81664 00:20:56.176 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80803 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:56.176 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:02.751 Attaching 4 probes... 
00:21:02.751 @path[10.0.0.3, 4421]: 16935 00:21:02.751 @path[10.0.0.3, 4421]: 17253 00:21:02.751 @path[10.0.0.3, 4421]: 17268 00:21:02.751 @path[10.0.0.3, 4421]: 17204 00:21:02.751 @path[10.0.0.3, 4421]: 17329 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81664 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80859 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80859 ']' 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80859 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80859 00:21:02.751 killing process with pid 80859 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80859' 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80859 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80859 00:21:02.751 { 00:21:02.751 "results": [ 00:21:02.751 { 00:21:02.751 "job": "Nvme0n1", 00:21:02.751 "core_mask": "0x4", 00:21:02.751 "workload": "verify", 00:21:02.751 "status": "terminated", 00:21:02.751 "verify_range": { 00:21:02.751 "start": 0, 00:21:02.751 "length": 16384 00:21:02.751 }, 00:21:02.751 "queue_depth": 128, 00:21:02.751 "io_size": 4096, 00:21:02.751 "runtime": 55.68131, 00:21:02.751 "iops": 7165.887440507417, 00:21:02.751 "mibps": 27.991747814482096, 00:21:02.751 "io_failed": 0, 00:21:02.751 "io_timeout": 0, 00:21:02.751 "avg_latency_us": 17831.093396690772, 00:21:02.751 "min_latency_us": 826.6472727272727, 00:21:02.751 "max_latency_us": 7046430.72 00:21:02.751 } 00:21:02.751 ], 00:21:02.751 "core_count": 1 00:21:02.751 } 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80859 00:21:02.751 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:02.751 [2024-11-19 10:14:17.990908] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 
24.03.0 initialization... 00:21:02.751 [2024-11-19 10:14:17.991671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80859 ] 00:21:02.751 [2024-11-19 10:14:18.142161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.751 [2024-11-19 10:14:18.223889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.751 [2024-11-19 10:14:18.283840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:02.751 Running I/O for 90 seconds... 00:21:02.751 6932.00 IOPS, 27.08 MiB/s [2024-11-19T10:15:16.640Z] 6878.00 IOPS, 26.87 MiB/s [2024-11-19T10:15:16.640Z] 6846.33 IOPS, 26.74 MiB/s [2024-11-19T10:15:16.640Z] 6799.00 IOPS, 26.56 MiB/s [2024-11-19T10:15:16.640Z] 6770.40 IOPS, 26.45 MiB/s [2024-11-19T10:15:16.640Z] 6751.33 IOPS, 26.37 MiB/s [2024-11-19T10:15:16.640Z] 6737.71 IOPS, 26.32 MiB/s [2024-11-19T10:15:16.640Z] 6727.50 IOPS, 26.28 MiB/s [2024-11-19T10:15:16.640Z] [2024-11-19 10:14:28.426301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.751 [2024-11-19 10:14:28.426388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:02.751 [2024-11-19 10:14:28.426452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.751 [2024-11-19 10:14:28.426475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.751 [2024-11-19 10:14:28.426499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.751 [2024-11-19 10:14:28.426515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:02.751 [2024-11-19 10:14:28.426536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.751 [2024-11-19 10:14:28.426552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:02.751 [2024-11-19 10:14:28.426573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.751 [2024-11-19 10:14:28.426589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:02.751 [2024-11-19 10:14:28.426611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.751 [2024-11-19 10:14:28.426626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:02.751 [2024-11-19 10:14:28.426648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.426663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.426685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.426700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.426722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.426737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.426759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.426806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.426831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.426847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.426869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.426884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.426905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.426937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.426961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.426976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.427015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.427053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:02.752 [2024-11-19 10:14:28.427359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.427966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.427988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.428003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.428040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.428089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.428127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.428164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.428222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.428259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.752 [2024-11-19 10:14:28.428296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.428333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.428371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.428408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.428445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:02.752 [2024-11-19 10:14:28.428467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-19 10:14:28.428482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.428520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:21:02.753 [2024-11-19 10:14:28.428551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.428568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.428606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.428981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.428997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.429269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.429306] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.429344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.429381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.429417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.429454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.429497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-19 10:14:28.429537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 
10:14:28.429779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.429969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.429984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.430006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.430021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.430043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.430059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:02.753 [2024-11-19 10:14:28.430094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.753 [2024-11-19 10:14:28.430111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:02.754 [2024-11-19 10:14:28.430133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.754 [2024-11-19 10:14:28.430149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:02.754 [2024-11-19 10:14:28.430171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58912 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:02.754 [2024-11-19 10:14:28.430186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:21:02.754 [2024-11-19 10:14:28.430207 - 10:14:28.433191] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs on qid:1 (nsid:1, len:8): WRITE lba:58920-59072 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:58296-58416 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02)
00:21:02.754 6921.00 IOPS, 27.04 MiB/s [2024-11-19T10:15:16.643Z] 7124.90 IOPS, 27.83 MiB/s [2024-11-19T10:15:16.643Z] 7285.09 IOPS, 28.46 MiB/s [2024-11-19T10:15:16.643Z] 7422.67 IOPS, 28.99 MiB/s [2024-11-19T10:15:16.643Z] 7538.54 IOPS, 29.45 MiB/s [2024-11-19T10:15:16.644Z] 7638.71 IOPS, 29.84 MiB/s [2024-11-19T10:15:16.644Z]
00:21:02.755 [2024-11-19 10:14:35.014663 - 10:14:35.020906] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs on qid:1 (nsid:1, len:8): WRITE lba:4224-4856 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:3840-4216 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02)
00:21:02.758 7677.87 IOPS, 29.99 MiB/s [2024-11-19T10:15:16.647Z] 7239.56 IOPS, 28.28 MiB/s [2024-11-19T10:15:16.647Z] 7320.06 IOPS, 28.59 MiB/s [2024-11-19T10:15:16.647Z] 7393.78 IOPS, 28.88 MiB/s [2024-11-19T10:15:16.647Z] 7463.21 IOPS, 29.15 MiB/s [2024-11-19T10:15:16.647Z] 7528.65 IOPS, 29.41 MiB/s [2024-11-19T10:15:16.647Z] 7588.24 IOPS, 29.64 MiB/s [2024-11-19T10:15:16.647Z]
00:21:02.758 [2024-11-19 10:14:42.091867 - 10:14:42.093209] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs on qid:1 (nsid:1, len:8): WRITE lba:34184-34352 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:33672-33728 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02)
00:21:02.759 [2024-11-19 10:14:42.093231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093247] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.093585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:02.759 [2024-11-19 10:14:42.093632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.093972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.093989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.094010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.094026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.094048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.094063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.094085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.094110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.094134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.094151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.094173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.094189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.094211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-19 10:14:42.094226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.094252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.094269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:02.759 [2024-11-19 10:14:42.094291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.759 [2024-11-19 10:14:42.094306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
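(Editorial aside, not part of the captured output: the progress readings interleaved with these qpair traces, e.g. "7677.87 IOPS, 29.99 MiB/s", are mutually consistent if each I/O is 4 KiB, i.e. the "len:8" in the commands above times an assumed 512-byte logical block size. The small sketch below is only an illustrative check of that arithmetic against figures quoted from this log; the helper name `iops_to_mibps` is hypothetical and not part of SPDK or the test suite.)

```python
# Sanity check of the IOPS -> MiB/s figures printed in the log above,
# assuming 4 KiB per I/O (len:8 blocks x 512-byte logical blocks).

def iops_to_mibps(iops: float, io_bytes: int = 8 * 512) -> float:
    """Convert an IOPS reading to MiB/s for a fixed I/O size."""
    return iops * io_bytes / (1024 * 1024)

# Readings taken verbatim from the progress lines in this section:
for iops, reported in [(7677.87, 29.99), (7239.56, 28.28), (6001.75, 23.44)]:
    print(f"{iops:8.2f} IOPS -> {iops_to_mibps(iops):5.2f} MiB/s (log reports {reported})")
```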
00:21:02.760 [2024-11-19 10:14:42.094793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.094809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.094846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.094883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.094933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.094974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.094996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-19 10:14:42.095444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.095482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.095524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.095572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.095610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:02.760 [2024-11-19 10:14:42.095632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.760 [2024-11-19 10:14:42.095648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.095685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.095722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.095760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.095797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.095834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.095871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.095909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 
[2024-11-19 10:14:42.095964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.095987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.096698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.096713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-19 10:14:42.097509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.097563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.097609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.097655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.097701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.097747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.097807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.097858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:02.761 [2024-11-19 10:14:42.097905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.761 [2024-11-19 10:14:42.097940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:02.761 7638.59 IOPS, 29.84 MiB/s [2024-11-19T10:15:16.650Z] 7306.48 IOPS, 28.54 MiB/s [2024-11-19T10:15:16.650Z] 7002.04 IOPS, 27.35 MiB/s [2024-11-19T10:15:16.650Z] 6721.96 IOPS, 26.26 MiB/s [2024-11-19T10:15:16.650Z] 6463.42 IOPS, 25.25 MiB/s [2024-11-19T10:15:16.650Z] 6224.04 IOPS, 24.31 MiB/s [2024-11-19T10:15:16.650Z] 6001.75 IOPS, 23.44 MiB/s 
[2024-11-19T10:15:16.650Z] 5795.62 IOPS, 22.64 MiB/s [2024-11-19T10:15:16.650Z] 5882.97 IOPS, 22.98 MiB/s [2024-11-19T10:15:16.650Z] 5965.71 IOPS, 23.30 MiB/s [2024-11-19T10:15:16.650Z] 6043.78 IOPS, 23.61 MiB/s [2024-11-19T10:15:16.650Z] 6123.18 IOPS, 23.92 MiB/s [2024-11-19T10:15:16.650Z] 6197.21 IOPS, 24.21 MiB/s [2024-11-19T10:15:16.650Z] 6265.17 IOPS, 24.47 MiB/s [2024-11-19T10:15:16.650Z] [2024-11-19 10:14:55.539815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.539877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.539989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-19 10:14:55.540325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-19 10:14:55.540367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-19 10:14:55.540405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-19 10:14:55.540442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-19 10:14:55.540486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-19 10:14:55.540552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-19 10:14:55.540587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-19 10:14:55.540622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.540955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.540985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.541000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.541015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.541028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-19 10:14:55.541042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.762 [2024-11-19 10:14:55.541055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.762 [2024-11-19 10:14:55.541069 .. 10:14:55.543729] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for every remaining outstanding command on sqid:1 (READ lba:80648..80952, WRITE lba:81136..81536, len:8 each), every one completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.765 [2024-11-19 10:14:55.543743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ac290 is same with the state(6) to be set
00:21:02.765 [2024-11-19 10:14:55.543759 .. 10:14:55.544211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: repeated for the still-queued commands (READ lba:80960, WRITE lba:81544..81600, PRP1 0x0 PRP2 0x0), each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.765 [2024-11-19 10:14:55.545408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:21:02.765 [2024-11-19 10:14:55.545511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.765 [2024-11-19 10:14:55.545535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.765 [2024-11-19 10:14:55.545567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161d1d0 (9): Bad file descriptor
00:21:02.765 [2024-11-19 10:14:55.546042] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.765 [2024-11-19 10:14:55.546075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161d1d0 with addr=10.0.0.3, port=4421
00:21:02.765 [2024-11-19 10:14:55.546092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161d1d0 is same with the state(6) to be set
00:21:02.765 [2024-11-19 10:14:55.546166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161d1d0 (9): Bad file descriptor
00:21:02.765 [2024-11-19 10:14:55.546204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:21:02.765 [2024-11-19 10:14:55.546221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:21:02.765 [2024-11-19 10:14:55.546236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:21:02.765 [2024-11-19 10:14:55.546249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:21:02.765 [2024-11-19 10:14:55.546264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:02.765 6336.28 IOPS, 24.75 MiB/s [2024-11-19T10:15:16.654Z] 6399.41 IOPS, 25.00 MiB/s [2024-11-19T10:15:16.654Z] 6463.84 IOPS, 25.25 MiB/s [2024-11-19T10:15:16.654Z] 6523.74 IOPS, 25.48 MiB/s [2024-11-19T10:15:16.654Z] 6582.65 IOPS, 25.71 MiB/s [2024-11-19T10:15:16.654Z] 6640.24 IOPS, 25.94 MiB/s [2024-11-19T10:15:16.654Z] 6689.95 IOPS, 26.13 MiB/s [2024-11-19T10:15:16.654Z] 6737.63 IOPS, 26.32 MiB/s [2024-11-19T10:15:16.654Z] 6780.41 IOPS, 26.49 MiB/s [2024-11-19T10:15:16.654Z] 6822.44 IOPS, 26.65 MiB/s [2024-11-19T10:15:16.654Z] [2024-11-19 10:15:05.601246] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:21:02.765 6861.35 IOPS, 26.80 MiB/s [2024-11-19T10:15:16.654Z] 6899.36 IOPS, 26.95 MiB/s [2024-11-19T10:15:16.654Z] 6935.85 IOPS, 27.09 MiB/s [2024-11-19T10:15:16.654Z] 6970.41 IOPS, 27.23 MiB/s [2024-11-19T10:15:16.654Z] 7001.72 IOPS, 27.35 MiB/s [2024-11-19T10:15:16.654Z] 7033.65 IOPS, 27.48 MiB/s [2024-11-19T10:15:16.654Z] 7064.12 IOPS, 27.59 MiB/s [2024-11-19T10:15:16.654Z] 7094.00 IOPS, 27.71 MiB/s [2024-11-19T10:15:16.654Z] 7122.31 IOPS, 27.82 MiB/s [2024-11-19T10:15:16.654Z] 7150.22 IOPS, 27.93 MiB/s [2024-11-19T10:15:16.654Z] Received shutdown signal, test time was about 55.682130 seconds 00:21:02.765 00:21:02.765 Latency(us) 00:21:02.765 [2024-11-19T10:15:16.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.765 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.765 Verification LBA range: start 0x0 length 0x4000 00:21:02.765 Nvme0n1 : 55.68 7165.89 27.99 0.00 0.00 17831.09 826.65 7046430.72 00:21:02.765 [2024-11-19T10:15:16.654Z] =================================================================================================================== 00:21:02.765 [2024-11-19T10:15:16.655Z] Total : 7165.89 27.99 0.00 0.00 17831.09 826.65 7046430.72 00:21:02.766 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.766 rmmod nvme_tcp 00:21:02.766 rmmod nvme_fabrics 00:21:02.766 rmmod nvme_keyring 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 
00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80803 ']' 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80803 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80803 ']' 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80803 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80803 00:21:02.766 killing process with pid 80803 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80803' 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80803 00:21:02.766 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80803 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:21:03.025 00:21:03.025 real 1m2.433s 00:21:03.025 user 2m52.998s 00:21:03.025 sys 0m18.265s 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:03.025 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:03.025 ************************************ 00:21:03.025 END TEST nvmf_host_multipath 00:21:03.025 ************************************ 00:21:03.285 10:15:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:03.285 10:15:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:03.285 10:15:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.285 10:15:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.285 ************************************ 00:21:03.285 START TEST nvmf_timeout 00:21:03.285 ************************************ 00:21:03.285 10:15:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:03.285 * Looking for test storage... 
00:21:03.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:03.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.285 --rc genhtml_branch_coverage=1 00:21:03.285 --rc genhtml_function_coverage=1 00:21:03.285 --rc genhtml_legend=1 00:21:03.285 --rc geninfo_all_blocks=1 00:21:03.285 --rc geninfo_unexecuted_blocks=1 00:21:03.285 00:21:03.285 ' 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:03.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.285 --rc genhtml_branch_coverage=1 00:21:03.285 --rc genhtml_function_coverage=1 00:21:03.285 --rc genhtml_legend=1 00:21:03.285 --rc geninfo_all_blocks=1 00:21:03.285 --rc geninfo_unexecuted_blocks=1 00:21:03.285 00:21:03.285 ' 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:03.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.285 --rc genhtml_branch_coverage=1 00:21:03.285 --rc genhtml_function_coverage=1 00:21:03.285 --rc genhtml_legend=1 00:21:03.285 --rc geninfo_all_blocks=1 00:21:03.285 --rc geninfo_unexecuted_blocks=1 00:21:03.285 00:21:03.285 ' 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:03.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.285 --rc genhtml_branch_coverage=1 00:21:03.285 --rc genhtml_function_coverage=1 00:21:03.285 --rc genhtml_legend=1 00:21:03.285 --rc geninfo_all_blocks=1 00:21:03.285 --rc geninfo_unexecuted_blocks=1 00:21:03.285 00:21:03.285 ' 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.285 
10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:03.285 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:03.286 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:03.286 10:15:17 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:03.286 Cannot find device "nvmf_init_br" 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:03.286 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:03.545 Cannot find device "nvmf_init_br2" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:21:03.545 Cannot find device "nvmf_tgt_br" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:03.545 Cannot find device "nvmf_tgt_br2" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:03.545 Cannot find device "nvmf_init_br" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:03.545 Cannot find device "nvmf_init_br2" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:03.545 Cannot find device "nvmf_tgt_br" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:03.545 Cannot find device "nvmf_tgt_br2" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:03.545 Cannot find device "nvmf_br" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:03.545 Cannot find device "nvmf_init_if" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:03.545 Cannot find device "nvmf_init_if2" 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:03.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:03.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:03.545 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
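The nvmf_veth_init sequence above builds a single bridged L2 segment: the initiator-side veth endpoints stay in the root namespace, the target-side endpoints (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, all bridge-side peers are enslaved to nvmf_br, and iptables then opens TCP port 4420 plus bridge-internal forwarding. A minimal stand-alone sketch of the same topology, condensed to one veth pair per side (interface names and the 10.0.0.0/24 addressing are taken from the log; the second pair of interfaces and error handling are omitted):

    # target-side network namespace plus one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: initiator 10.0.0.1, target 10.0.0.3, same /24
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring the links up and enslave the bridge-side peers to nvmf_br
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow NVMe/TCP (port 4420) in on the initiator interface and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow in the log simply confirm that each side can reach the other's addresses across nvmf_br before any NVMe/TCP traffic is attempted.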
00:21:03.804 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:03.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:03.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:21:03.804 00:21:03.805 --- 10.0.0.3 ping statistics --- 00:21:03.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.805 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:03.805 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:03.805 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:21:03.805 00:21:03.805 --- 10.0.0.4 ping statistics --- 00:21:03.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.805 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:03.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:03.805 00:21:03.805 --- 10.0.0.1 ping statistics --- 00:21:03.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.805 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:03.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:21:03.805 00:21:03.805 --- 10.0.0.2 ping statistics --- 00:21:03.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.805 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82028 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82028 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:03.805 10:15:17 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82028 ']' 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.805 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:03.805 [2024-11-19 10:15:17.610836] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:21:03.805 [2024-11-19 10:15:17.610908] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.064 [2024-11-19 10:15:17.753813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:04.064 [2024-11-19 10:15:17.810923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.064 [2024-11-19 10:15:17.810987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.064 [2024-11-19 10:15:17.811015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.064 [2024-11-19 10:15:17.811039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.064 [2024-11-19 10:15:17.811046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
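nvmfappstart above runs nvmf_tgt inside the target namespace (via NVMF_TARGET_NS_CMD) and then waits for it to answer on its UNIX-domain RPC socket. A condensed sketch of that pattern, using the binary path and flags from the log; the spdk_get_version probe is only an illustrative readiness check, not necessarily what waitforlisten does internally:

    SPDK=/home/vagrant/spdk_repo/spdk

    # start the target in the namespace: instance 0, tracepoint mask 0xFFFF, cores 0-1 (-m 0x3)
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # poll the RPC socket until the target is up and listening
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

The EAL and reactor notices above are the visible result of that start-up: two reactors on cores 0 and 1, and the uring socket implementation selected as the default.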
00:21:04.064 [2024-11-19 10:15:17.812270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.064 [2024-11-19 10:15:17.812281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.064 [2024-11-19 10:15:17.865802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:04.064 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.064 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:04.064 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.064 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.064 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:04.323 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.323 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.323 10:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:04.580 [2024-11-19 10:15:18.279226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.580 10:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:04.838 Malloc0 00:21:04.838 10:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.097 10:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:05.664 [2024-11-19 10:15:19.482951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82070 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82070 /var/tmp/bdevperf.sock 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82070 ']' 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
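The RPCs issued above bring the target side of the test up: a TCP transport, one 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.3:4420. Collected into one place, with every value copied from the log (rpc.py talks to the default /var/tmp/spdk.sock of the target started above):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    # TCP transport with the options the test uses (-o -u 8192, copied from the log)
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB backing bdev with 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # subsystem allowing any host (-a), exposing Malloc0 on 10.0.0.3:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevperf is then started as the host-side application (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f), and the test waits for its RPC socket in the same way it waited for the target's.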
00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.664 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:05.923 [2024-11-19 10:15:19.553960] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:21:05.923 [2024-11-19 10:15:19.554059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82070 ] 00:21:05.923 [2024-11-19 10:15:19.697254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.923 [2024-11-19 10:15:19.759082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.181 [2024-11-19 10:15:19.815676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:06.181 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.181 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:06.181 10:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:06.440 10:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:06.698 NVMe0n1 00:21:06.698 10:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82086 00:21:06.698 10:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:06.698 10:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:06.973 Running I/O for 10 seconds... 
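On the host side the timeout test configures bdevperf over its own RPC socket: bdev_nvme options are set first, then the controller is attached with a 5-second controller-loss timeout and a 2-second reconnect delay, and it is this reconnect behaviour that the listener removal below is meant to exercise. The two RPCs, with all flags copied from the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    # bdev_nvme options used by the test (-r -1, copied from the log)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1

    # attach NVMe0 over TCP to the subsystem created above; retry the connection
    # every 2 s and give the controller up after 5 s without a connection
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

perform_tests then kicks off the 10-second verify workload; the nvmf_subsystem_remove_listener call that follows pulls the listener out from under the running I/O, which is why the log continues with a burst of ABORTED - SQ DELETION completions for the in-flight commands.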
00:21:07.908 10:15:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:08.169 6805.00 IOPS, 26.58 MiB/s [2024-11-19T10:15:22.058Z] [2024-11-19 10:15:21.833224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.169 [2024-11-19 10:15:21.833304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-19 10:15:21.833318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.169 [2024-11-19 10:15:21.833328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-19 10:15:21.833338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.169 [2024-11-19 10:15:21.833348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-19 10:15:21.833358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.169 [2024-11-19 10:15:21.833367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-19 10:15:21.833377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe50 is same with the state(6) to be set 00:21:08.169 [2024-11-19 10:15:21.833966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-19 10:15:21.833994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-19 10:15:21.834017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 
10:15:21.834824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.834855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.834991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.835973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.835983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.836355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.836378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.836400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.836421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.836441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.836461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.836481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.170 [2024-11-19 10:15:21.836822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-19 10:15:21.836834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.836843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.836854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.836864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.836875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.836884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66176 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.837982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.837994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 
10:15:21.838037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.171 [2024-11-19 10:15:21.838520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.171 [2024-11-19 10:15:21.838532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838903] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.838980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.838989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839125] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.172 [2024-11-19 10:15:21.839224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.172 [2024-11-19 10:15:21.839245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.172 [2024-11-19 10:15:21.839266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.172 [2024-11-19 10:15:21.839287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.172 [2024-11-19 10:15:21.839309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.172 [2024-11-19 10:15:21.839330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65744 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.172 [2024-11-19 10:15:21.839351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.172 [2024-11-19 10:15:21.839372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.172 [2024-11-19 10:15:21.839393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.172 [2024-11-19 10:15:21.839404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-19 10:15:21.839413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-19 10:15:21.839434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-19 10:15:21.839454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-19 10:15:21.839475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-19 10:15:21.839496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-19 10:15:21.839516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-19 10:15:21.839543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:08.173 [2024-11-19 10:15:21.839565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86d1d0 is same with the state(6) to be set 00:21:08.173 [2024-11-19 10:15:21.839588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:08.173 [2024-11-19 10:15:21.839596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:08.173 [2024-11-19 10:15:21.839605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:21:08.173 [2024-11-19 10:15:21.839615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-19 10:15:21.839934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:08.173 [2024-11-19 10:15:21.839961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe50 (9): Bad file descriptor 00:21:08.173 [2024-11-19 10:15:21.840058] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.173 [2024-11-19 10:15:21.840081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffe50 with addr=10.0.0.3, port=4420 00:21:08.173 [2024-11-19 10:15:21.840093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe50 is same with the state(6) to be set 00:21:08.173 [2024-11-19 10:15:21.840112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe50 (9): Bad file descriptor 00:21:08.173 [2024-11-19 10:15:21.840128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:08.173 [2024-11-19 10:15:21.840138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:08.173 [2024-11-19 10:15:21.840149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:08.173 [2024-11-19 10:15:21.840160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:08.173 [2024-11-19 10:15:21.840170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:08.173 10:15:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:10.044 4106.00 IOPS, 16.04 MiB/s [2024-11-19T10:15:23.933Z] 2737.33 IOPS, 10.69 MiB/s [2024-11-19T10:15:23.933Z] [2024-11-19 10:15:23.840512] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.044 [2024-11-19 10:15:23.840589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffe50 with addr=10.0.0.3, port=4420 00:21:10.044 [2024-11-19 10:15:23.840607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe50 is same with the state(6) to be set 00:21:10.044 [2024-11-19 10:15:23.840634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe50 (9): Bad file descriptor 00:21:10.044 [2024-11-19 10:15:23.840666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:10.044 [2024-11-19 10:15:23.840677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:10.044 [2024-11-19 10:15:23.840688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:10.044 [2024-11-19 10:15:23.840701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:10.044 [2024-11-19 10:15:23.840713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:10.044 10:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:10.044 10:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:10.044 10:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:10.304 10:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:10.304 10:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:10.304 10:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:10.304 10:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:10.872 10:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:10.872 10:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:11.809 2053.00 IOPS, 8.02 MiB/s [2024-11-19T10:15:25.956Z] 1642.40 IOPS, 6.42 MiB/s [2024-11-19T10:15:25.956Z] [2024-11-19 10:15:25.840967] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.067 [2024-11-19 10:15:25.841021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffe50 with addr=10.0.0.3, port=4420 00:21:12.067 [2024-11-19 10:15:25.841038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe50 is same with the state(6) to be set 00:21:12.067 [2024-11-19 10:15:25.841064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe50 (9): Bad file descriptor 00:21:12.067 [2024-11-19 10:15:25.841084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:12.067 [2024-11-19 10:15:25.841095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:12.067 [2024-11-19 10:15:25.841106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:12.067 [2024-11-19 10:15:25.841117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:12.067 [2024-11-19 10:15:25.841129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:13.940 1368.67 IOPS, 5.35 MiB/s [2024-11-19T10:15:28.088Z] 1173.14 IOPS, 4.58 MiB/s [2024-11-19T10:15:28.088Z] [2024-11-19 10:15:27.841342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:14.199 [2024-11-19 10:15:27.841433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:14.199 [2024-11-19 10:15:27.841446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:14.199 [2024-11-19 10:15:27.841457] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:21:14.199 [2024-11-19 10:15:27.841470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:15.025 1026.50 IOPS, 4.01 MiB/s 00:21:15.025 Latency(us) 00:21:15.025 [2024-11-19T10:15:28.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.026 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:15.026 Verification LBA range: start 0x0 length 0x4000 00:21:15.026 NVMe0n1 : 8.20 1001.19 3.91 15.61 0.00 125663.05 4021.53 7015926.69 00:21:15.026 [2024-11-19T10:15:28.915Z] =================================================================================================================== 00:21:15.026 [2024-11-19T10:15:28.915Z] Total : 1001.19 3.91 15.61 0.00 125663.05 4021.53 7015926.69 00:21:15.026 { 00:21:15.026 "results": [ 00:21:15.026 { 00:21:15.026 "job": "NVMe0n1", 00:21:15.026 "core_mask": "0x4", 00:21:15.026 "workload": "verify", 00:21:15.026 "status": "finished", 00:21:15.026 "verify_range": { 00:21:15.026 "start": 0, 00:21:15.026 "length": 16384 00:21:15.026 }, 00:21:15.026 "queue_depth": 128, 00:21:15.026 "io_size": 4096, 00:21:15.026 "runtime": 8.202244, 00:21:15.026 "iops": 1001.1894305997237, 00:21:15.026 "mibps": 3.9108962132801706, 00:21:15.026 "io_failed": 128, 00:21:15.026 "io_timeout": 0, 00:21:15.026 "avg_latency_us": 125663.04939001525, 00:21:15.026 "min_latency_us": 4021.5272727272727, 00:21:15.026 "max_latency_us": 7015926.69090909 00:21:15.026 } 00:21:15.026 ], 00:21:15.026 "core_count": 1 00:21:15.026 } 00:21:15.591 10:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:21:15.591 10:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:15.591 10:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:16.158 10:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:21:16.158 10:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:21:16.158 10:15:29 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:16.158 10:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82086 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82070 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82070 ']' 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82070 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82070 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:16.416 killing process with pid 82070 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82070' 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82070 00:21:16.416 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82070 00:21:16.416 Received shutdown signal, test time was about 9.452946 seconds 00:21:16.416 00:21:16.416 Latency(us) 00:21:16.416 [2024-11-19T10:15:30.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.416 [2024-11-19T10:15:30.305Z] =================================================================================================================== 00:21:16.417 [2024-11-19T10:15:30.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.417 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:16.983 [2024-11-19 10:15:30.594838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82209 00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82209 /var/tmp/bdevperf.sock 00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82209 ']' 00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.983 10:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:16.983 [2024-11-19 10:15:30.663805] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:21:16.983 [2024-11-19 10:15:30.663889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82209 ] 00:21:16.983 [2024-11-19 10:15:30.805709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.983 [2024-11-19 10:15:30.864617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.242 [2024-11-19 10:15:30.919283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:17.809 10:15:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.809 10:15:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:17.809 10:15:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:18.068 10:15:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:21:18.635 NVMe0n1 00:21:18.635 10:15:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82238 00:21:18.635 10:15:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.635 10:15:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:21:18.635 Running I/O for 10 seconds... 
00:21:19.580 10:15:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:19.842 8448.00 IOPS, 33.00 MiB/s [2024-11-19T10:15:33.731Z] [2024-11-19 10:15:33.542501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e80a0 is same with the state(6) to be set 00:21:19.842 [2024-11-19 10:15:33.542567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e80a0 is same with the state(6) to be set 00:21:19.842 [2024-11-19 10:15:33.542578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e80a0 is same with the state(6) to be set 00:21:19.842 [2024-11-19 10:15:33.542971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.842 [2024-11-19 10:15:33.543001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.842 [2024-11-19 10:15:33.543036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.842 [2024-11-19 10:15:33.543058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.842 [2024-11-19 10:15:33.543080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:19.842 [2024-11-19 10:15:33.543180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.842 [2024-11-19 10:15:33.543358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.842 [2024-11-19 10:15:33.543368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543401] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.543971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.543983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.543992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.544015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.544037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.544059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79328 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.544082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.544106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.544127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.843 [2024-11-19 10:15:33.544149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.544170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.544192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.544226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.843 [2024-11-19 10:15:33.544238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.843 [2024-11-19 10:15:33.544253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 
[2024-11-19 10:15:33.544317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.844 [2024-11-19 10:15:33.544733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.844 [2024-11-19 10:15:33.544755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.844 [2024-11-19 10:15:33.544777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.844 [2024-11-19 10:15:33.544798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.844 [2024-11-19 10:15:33.544820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.844 [2024-11-19 10:15:33.544842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.844 [2024-11-19 10:15:33.544864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.844 [2024-11-19 10:15:33.544885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.544986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.544998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.545007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.545019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.545029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.545041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.545050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.545062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.545072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.844 [2024-11-19 10:15:33.545084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.844 [2024-11-19 10:15:33.545094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.845 [2024-11-19 10:15:33.545115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.845 [2024-11-19 10:15:33.545137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.845 [2024-11-19 10:15:33.545160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.845 [2024-11-19 10:15:33.545182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.845 [2024-11-19 10:15:33.545203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.845 [2024-11-19 10:15:33.545225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 
10:15:33.545238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.845 [2024-11-19 10:15:33.545248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.845 [2024-11-19 10:15:33.545269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.845 [2024-11-19 10:15:33.545291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.845 [2024-11-19 10:15:33.545318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.845 [2024-11-19 10:15:33.545340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.845 [2024-11-19 10:15:33.545361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.845 [2024-11-19 10:15:33.545383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.845 [2024-11-19 10:15:33.545404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a1d0 is same with the state(6) to be set 00:21:19.845 [2024-11-19 10:15:33.545428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 
[2024-11-19 10:15:33.545466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545695] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80072 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80080 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80088 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:21:19.845 [2024-11-19 10:15:33.545937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80096 len:8 PRP1 0x0 PRP2 0x0 00:21:19.845 [2024-11-19 10:15:33.545955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.845 [2024-11-19 10:15:33.545965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.845 [2024-11-19 10:15:33.545973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.845 [2024-11-19 10:15:33.545989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80104 len:8 PRP1 0x0 PRP2 0x0 00:21:19.846 [2024-11-19 10:15:33.545998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.846 [2024-11-19 10:15:33.546008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.846 [2024-11-19 10:15:33.546016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.846 [2024-11-19 10:15:33.546024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80112 len:8 PRP1 0x0 PRP2 0x0 00:21:19.846 [2024-11-19 10:15:33.546033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.846 [2024-11-19 10:15:33.546043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.846 [2024-11-19 10:15:33.546051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.846 [2024-11-19 10:15:33.546059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80120 len:8 PRP1 0x0 PRP2 0x0 00:21:19.846 [2024-11-19 10:15:33.546068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.846 [2024-11-19 10:15:33.546078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.846 [2024-11-19 10:15:33.546086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.846 [2024-11-19 10:15:33.546094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80128 len:8 PRP1 0x0 PRP2 0x0 00:21:19.846 [2024-11-19 10:15:33.546114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.846 [2024-11-19 10:15:33.546124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.846 [2024-11-19 10:15:33.546131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.846 [2024-11-19 10:15:33.546139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80136 len:8 PRP1 0x0 PRP2 0x0 00:21:19.846 [2024-11-19 10:15:33.546149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.846 [2024-11-19 10:15:33.546158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.846 [2024-11-19 10:15:33.546165] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.846 [2024-11-19 10:15:33.546173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80144 len:8 PRP1 0x0 PRP2 0x0 00:21:19.846 [2024-11-19 10:15:33.546182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.846 [2024-11-19 10:15:33.546192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.846 [2024-11-19 10:15:33.546199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.846 [2024-11-19 10:15:33.546207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80152 len:8 PRP1 0x0 PRP2 0x0 00:21:19.846 [2024-11-19 10:15:33.546221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.846 [2024-11-19 10:15:33.547507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:19.846 [2024-11-19 10:15:33.547599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ece50 (9): Bad file descriptor 00:21:19.846 [2024-11-19 10:15:33.547706] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.846 [2024-11-19 10:15:33.547728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ece50 with addr=10.0.0.3, port=4420 00:21:19.846 [2024-11-19 10:15:33.547740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ece50 is same with the state(6) to be set 00:21:19.846 [2024-11-19 10:15:33.547758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ece50 (9): Bad file descriptor 00:21:19.846 [2024-11-19 10:15:33.547775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:19.846 [2024-11-19 10:15:33.547785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:19.846 [2024-11-19 10:15:33.547796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:19.846 [2024-11-19 10:15:33.547808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:21:19.846 [2024-11-19 10:15:33.559475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:19.846 10:15:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:21:20.781 4946.00 IOPS, 19.32 MiB/s [2024-11-19T10:15:34.670Z] [2024-11-19 10:15:34.559655] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.781 [2024-11-19 10:15:34.559714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ece50 with addr=10.0.0.3, port=4420 00:21:20.781 [2024-11-19 10:15:34.559731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ece50 is same with the state(6) to be set 00:21:20.781 [2024-11-19 10:15:34.559756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ece50 (9): Bad file descriptor 00:21:20.781 [2024-11-19 10:15:34.559775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:20.781 [2024-11-19 10:15:34.559787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:20.781 [2024-11-19 10:15:34.559798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:20.781 [2024-11-19 10:15:34.559811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:20.781 [2024-11-19 10:15:34.559823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:20.781 10:15:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:21.040 [2024-11-19 10:15:34.835144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:21.040 10:15:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82238 00:21:21.867 3297.33 IOPS, 12.88 MiB/s [2024-11-19T10:15:35.756Z] [2024-11-19 10:15:35.575341] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:21:23.739 2473.00 IOPS, 9.66 MiB/s
[2024-11-19T10:15:38.565Z] 3597.00 IOPS, 14.05 MiB/s
[2024-11-19T10:15:39.500Z] 4578.83 IOPS, 17.89 MiB/s
[2024-11-19T10:15:40.480Z] 5296.14 IOPS, 20.69 MiB/s
[2024-11-19T10:15:41.417Z] 5826.50 IOPS, 22.76 MiB/s
[2024-11-19T10:15:42.791Z] 6252.00 IOPS, 24.42 MiB/s
[2024-11-19T10:15:42.791Z] 6597.20 IOPS, 25.77 MiB/s
00:21:28.902 Latency(us)
00:21:28.902 [2024-11-19T10:15:42.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:28.902 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:28.902 Verification LBA range: start 0x0 length 0x4000
00:21:28.902 NVMe0n1 : 10.01 6602.63 25.79 0.00 0.00 19344.59 1645.85 3019898.88
00:21:28.902 [2024-11-19T10:15:42.791Z] ===================================================================================================================
00:21:28.902 [2024-11-19T10:15:42.791Z] Total : 6602.63 25.79 0.00 0.00 19344.59 1645.85 3019898.88
00:21:28.902 {
00:21:28.902 "results": [
00:21:28.902 {
00:21:28.902 "job": "NVMe0n1",
00:21:28.902 "core_mask": "0x4",
00:21:28.902 "workload": "verify",
00:21:28.902 "status": "finished",
00:21:28.902 "verify_range": {
00:21:28.902 "start": 0,
00:21:28.902 "length": 16384
00:21:28.902 },
00:21:28.902 "queue_depth": 128,
00:21:28.902 "io_size": 4096,
00:21:28.902 "runtime": 10.011163,
00:21:28.902 "iops": 6602.629484706223,
00:21:28.902 "mibps": 25.791521424633682,
00:21:28.902 "io_failed": 0,
00:21:28.902 "io_timeout": 0,
00:21:28.902 "avg_latency_us": 19344.58598333104,
00:21:28.902 "min_latency_us": 1645.8472727272726,
00:21:28.902 "max_latency_us": 3019898.88
00:21:28.902 }
00:21:28.902 ],
00:21:28.902 "core_count": 1
00:21:28.902 }
00:21:28.902 10:15:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82343
00:21:28.902 10:15:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:28.902 10:15:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:21:28.902 Running I/O for 10 seconds...
00:21:29.840 10:15:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:29.840 7061.00 IOPS, 27.58 MiB/s [2024-11-19T10:15:43.729Z] [2024-11-19 10:15:43.661840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.661896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.661940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.661954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.661967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.661977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.661989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.661999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.662010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.662020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.662031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.662041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.662052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.662062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.662074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.662083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.662095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.662104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.662116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64224 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.662125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.662137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.662147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.840 [2024-11-19 10:15:43.662159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.840 [2024-11-19 10:15:43.662476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.662500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.662511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.662523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.662532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.662544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.662554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.662566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.662576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.662588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.662598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.662609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.662620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.662631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.662641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.662652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 
[2024-11-19 10:15:43.663064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.663967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.663982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.664279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.664380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.664395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.664405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.664416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.664426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.664437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.664447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.664458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.664468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.664479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.664489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.664622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.664770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.664987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.841 [2024-11-19 10:15:43.665549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.841 [2024-11-19 10:15:43.665795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.665821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.665834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.665844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.665977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.665993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666452] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.666738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.666750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.666894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.667025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.667157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.667282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.667303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.667316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.667448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.667552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.667571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.667583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.667593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.667605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.667750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.667865] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.667878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.667890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.668612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.668859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.668984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.668996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.669006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.669018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.669027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.669039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.842 [2024-11-19 10:15:43.669048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.669060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.669070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.669082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.842 [2024-11-19 10:15:43.669091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.842 [2024-11-19 10:15:43.669111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.669133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:29.843 [... the same READ / ABORTED - SQ DELETION notice pair repeats for each remaining queued read on qid:1 (lba 63864 through 64080, various cid values); duplicate entries omitted ...] 
00:21:29.843 [2024-11-19 10:15:43.669774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669784] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.669796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.669817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.669839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.669860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.669882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.669903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.843 [2024-11-19 10:15:43.669922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.669935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175b290 is same with the state(6) to be set 00:21:29.843 [2024-11-19 10:15:43.669948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.843 [2024-11-19 10:15:43.669957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.843 [2024-11-19 10:15:43.669965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64144 len:8 PRP1 0x0 PRP2 0x0 00:21:29.843 [2024-11-19 10:15:43.669976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.843 [2024-11-19 10:15:43.670266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:29.843 [2024-11-19 10:15:43.670347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ece50 (9): Bad file descriptor 00:21:29.844 [2024-11-19 10:15:43.670454] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:29.844 [2024-11-19 10:15:43.670475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ece50 with addr=10.0.0.3, port=4420 00:21:29.844 [2024-11-19 
10:15:43.670487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ece50 is same with the state(6) to be set 00:21:29.844 [2024-11-19 10:15:43.670505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ece50 (9): Bad file descriptor 00:21:29.844 [2024-11-19 10:15:43.670522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:29.844 [2024-11-19 10:15:43.670532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:29.844 [2024-11-19 10:15:43.670544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:29.844 [2024-11-19 10:15:43.670555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:29.844 [2024-11-19 10:15:43.670566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:29.844 10:15:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:31.036 3978.50 IOPS, 15.54 MiB/s [2024-11-19T10:15:44.926Z] [2024-11-19 10:15:44.670712] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.037 [2024-11-19 10:15:44.670789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ece50 with addr=10.0.0.3, port=4420 00:21:31.037 [2024-11-19 10:15:44.670807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ece50 is same with the state(6) to be set 00:21:31.037 [2024-11-19 10:15:44.670834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ece50 (9): Bad file descriptor 00:21:31.037 [2024-11-19 10:15:44.670854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:31.037 [2024-11-19 10:15:44.670865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:31.037 [2024-11-19 10:15:44.670876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:31.037 [2024-11-19 10:15:44.670889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:21:31.037 [2024-11-19 10:15:44.670901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:31.972 2652.33 IOPS, 10.36 MiB/s [2024-11-19T10:15:45.861Z] [2024-11-19 10:15:45.671046] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.972 [2024-11-19 10:15:45.671107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ece50 with addr=10.0.0.3, port=4420 00:21:31.972 [2024-11-19 10:15:45.671124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ece50 is same with the state(6) to be set 00:21:31.972 [2024-11-19 10:15:45.671149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ece50 (9): Bad file descriptor 00:21:31.972 [2024-11-19 10:15:45.671169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:31.972 [2024-11-19 10:15:45.671179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:31.972 [2024-11-19 10:15:45.671191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:31.972 [2024-11-19 10:15:45.671203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:31.972 [2024-11-19 10:15:45.671214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:32.908 1989.25 IOPS, 7.77 MiB/s [2024-11-19T10:15:46.797Z] [2024-11-19 10:15:46.675071] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:32.908 [2024-11-19 10:15:46.675128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ece50 with addr=10.0.0.3, port=4420 00:21:32.908 [2024-11-19 10:15:46.675146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ece50 is same with the state(6) to be set 00:21:32.908 [2024-11-19 10:15:46.675609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ece50 (9): Bad file descriptor 00:21:32.908 [2024-11-19 10:15:46.676048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:32.908 [2024-11-19 10:15:46.676076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:32.908 [2024-11-19 10:15:46.676088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:32.908 [2024-11-19 10:15:46.676101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
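The four reconnect attempts above (10:15:43 through 10:15:46) all fail inside uring_sock_create with errno = 111, which is ECONNREFUSED: nothing is accepting connections on 10.0.0.3:4420 because the test has taken the subsystem's listener down, so the controller stays in the failed state until the listener is re-added at host/timeout.sh@102 just below. A small shell sketch of how to confirm both halves of that; the nvmf_subsystem_get_listeners RPC is assumed to be available in this SPDK build, and the target's RPC socket path is not shown in this excerpt, so the default socket is assumed.

```bash
# errno 111 from connect() is ECONNREFUSED: nothing is accepting on 10.0.0.3:4420.
python3 -c 'import os; print(os.strerror(111))'   # prints: Connection refused

# Cross-check from the target side (assumes the target's default RPC socket):
# list the listeners currently attached to the subsystem the initiator is retrying.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
```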
00:21:32.908 [2024-11-19 10:15:46.676114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:32.908 10:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:33.166 [2024-11-19 10:15:46.959250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:33.166 10:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82343 00:21:33.991 1591.40 IOPS, 6.22 MiB/s [2024-11-19T10:15:47.880Z] [2024-11-19 10:15:47.700477] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:21:35.873 2630.00 IOPS, 10.27 MiB/s [2024-11-19T10:15:50.695Z] 3615.43 IOPS, 14.12 MiB/s [2024-11-19T10:15:51.630Z] 4368.50 IOPS, 17.06 MiB/s [2024-11-19T10:15:52.562Z] 4943.56 IOPS, 19.31 MiB/s [2024-11-19T10:15:52.562Z] 5402.00 IOPS, 21.10 MiB/s 00:21:38.673 Latency(us) 00:21:38.673 [2024-11-19T10:15:52.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.673 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:38.673 Verification LBA range: start 0x0 length 0x4000 00:21:38.673 NVMe0n1 : 10.01 5408.90 21.13 3632.38 0.00 14120.63 856.44 3019898.88 00:21:38.673 [2024-11-19T10:15:52.562Z] =================================================================================================================== 00:21:38.673 [2024-11-19T10:15:52.562Z] Total : 5408.90 21.13 3632.38 0.00 14120.63 0.00 3019898.88 00:21:38.673 { 00:21:38.673 "results": [ 00:21:38.673 { 00:21:38.673 "job": "NVMe0n1", 00:21:38.673 "core_mask": "0x4", 00:21:38.673 "workload": "verify", 00:21:38.673 "status": "finished", 00:21:38.673 "verify_range": { 00:21:38.673 "start": 0, 00:21:38.673 "length": 16384 00:21:38.673 }, 00:21:38.673 "queue_depth": 128, 00:21:38.673 "io_size": 4096, 00:21:38.673 "runtime": 10.009421, 00:21:38.673 "iops": 5408.904271286022, 00:21:38.673 "mibps": 21.128532309711023, 00:21:38.673 "io_failed": 36358, 00:21:38.673 "io_timeout": 0, 00:21:38.673 "avg_latency_us": 14120.634040892919, 00:21:38.673 "min_latency_us": 856.4363636363636, 00:21:38.673 "max_latency_us": 3019898.88 00:21:38.673 } 00:21:38.673 ], 00:21:38.673 "core_count": 1 00:21:38.673 } 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82209 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82209 ']' 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82209 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82209 00:21:38.931 killing process with pid 82209 00:21:38.931 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.931 00:21:38.931 Latency(us) 00:21:38.931 [2024-11-19T10:15:52.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.931 [2024-11-19T10:15:52.820Z] =================================================================================================================== 00:21:38.931 [2024-11-19T10:15:52.820Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82209' 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82209 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82209 00:21:38.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82456 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82456 /var/tmp/bdevperf.sock 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82456 ']' 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.931 10:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:39.189 [2024-11-19 10:15:52.847769] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
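Before the second bdevperf instance being started here takes over, the JSON block printed a few lines above is worth a note: it is bdevperf's summary of the aborted run, roughly 5409 IOPS with 36358 failed I/Os over a 10 s window and an average latency around 14.1 ms. In the log every JSON line carries the elapsed-time prefix, so it is not directly machine readable as shown; if the same summary were captured cleanly to a file (results.json below is a hypothetical name, nothing in this test writes it), the headline numbers could be pulled out with jq:

```bash
# Extract the headline figures from a cleaned-up copy of the bdevperf JSON
# summary shown above (field names are exactly as they appear in the log).
jq '.results[0] | {job, iops, io_failed, avg_latency_us, runtime}' results.json
```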
00:21:39.189 [2024-11-19 10:15:52.847870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82456 ] 00:21:39.189 [2024-11-19 10:15:52.995658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.189 [2024-11-19 10:15:53.056240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.447 [2024-11-19 10:15:53.112075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:40.012 10:15:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.012 10:15:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:40.012 10:15:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82468 00:21:40.012 10:15:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82456 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:40.012 10:15:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:40.578 10:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:40.835 NVMe0n1 00:21:40.835 10:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82515 00:21:40.835 10:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.835 10:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:40.835 Running I/O for 10 seconds... 
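At this point the log has shown the whole setup for the second timeout scenario: bdevperf is started idle (-z) on /var/tmp/bdevperf.sock, the NVMe bdev options and the TCP controller (5 s ctrlr-loss timeout, 2 s reconnect delay) are configured over RPC, perform_tests kicks off the 10-second random-read job, and the next RPC in the log (host/timeout.sh@126) then removes the listener out from under it. A consolidated sketch of that sequence using only the commands visible in the log; the sleep is a simplified stand-in for the framework's waitforlisten helper, and the nvmf target at 10.0.0.3:4420 is assumed to be up already.

```bash
#!/usr/bin/env bash
# Condensed replay of the sequence above (sketch only; paths, flags and the
# subsystem NQN are copied from the log).
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) so the workload can be triggered later over RPC.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
sleep 1   # stand-in for waitforlisten on $SOCK

# NVMe bdev options exactly as issued in the log.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9

# Attach the TCP controller with the 5 s ctrlr-loss timeout / 2 s reconnect delay.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the random-read job, then pull the listener away from the target so
# the reconnect/abort behaviour captured below is exercised.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420
wait
```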
00:21:41.778 10:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:21:42.083 15113.00 IOPS, 59.04 MiB/s [2024-11-19T10:15:55.972Z] [2024-11-19 10:15:55.796059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 
00:21:42.083 [... the identical tcp.c:1773 "recv state of tqpair=0x10e5c10 is same with the state(6) to be set" error is logged back-to-back many times (timestamps 10:15:55.796135 through 10:15:55.797079); verbatim duplicates omitted ...] [2024-11-19 10:15:55.797088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5c10 is same with the state(6) to be set 00:21:42.084 [2024-11-19 10:15:55.797653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.084 [2024-11-19 10:15:55.797755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.084 [2024-11-19 10:15:55.797790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.084 [2024-11-19 10:15:55.797801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 
00:21:42.084 [... the same READ / ABORTED - SQ DELETION notice pair is then repeated for the rest of the reads queued on qid:1 (cid 122 counting down through cid 63, various LBAs); duplicate entries omitted ...] 
00:21:42.086 [2024-11-19 10:15:55.802027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:21:42.086 [2024-11-19 10:15:55.802036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 
10:15:55.802924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.802951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.802962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.803848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.803985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.804051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.804064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.804074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.086 [2024-11-19 10:15:55.804085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.086 [2024-11-19 10:15:55.804094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.804900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.804911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 
[2024-11-19 10:15:55.805663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.805965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.805979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.806102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.806113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.806124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.806133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.806144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.806154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.806395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.806410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.806423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.806432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.806529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.806542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.806554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.806564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.087 [2024-11-19 10:15:55.806575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.087 [2024-11-19 10:15:55.806584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.088 [2024-11-19 10:15:55.806595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.088 [2024-11-19 10:15:55.806604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.088 [2024-11-19 10:15:55.806704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2477090 is same with the state(6) to be set 00:21:42.088 [2024-11-19 10:15:55.806719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.088 [2024-11-19 10:15:55.806727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.088 [2024-11-19 10:15:55.806736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58520 len:8 PRP1 0x0 PRP2 0x0 00:21:42.088 [2024-11-19 10:15:55.806871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.088 [2024-11-19 10:15:55.807205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.088 [2024-11-19 10:15:55.807234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.088 [2024-11-19 10:15:55.807258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.088 [2024-11-19 10:15:55.807267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.088 [2024-11-19 10:15:55.807276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.088 [2024-11-19 10:15:55.807285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.088 [2024-11-19 10:15:55.807295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.088 [2024-11-19 10:15:55.807304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.088 [2024-11-19 10:15:55.807397] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409e50 is same with the state(6) to be set 00:21:42.088 [2024-11-19 10:15:55.807803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:42.088 [2024-11-19 10:15:55.807838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2409e50 (9): Bad file descriptor 00:21:42.088 [2024-11-19 10:15:55.808130] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.088 [2024-11-19 10:15:55.808164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2409e50 with addr=10.0.0.3, port=4420 00:21:42.088 [2024-11-19 10:15:55.808176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409e50 is same with the state(6) to be set 00:21:42.088 [2024-11-19 10:15:55.808196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2409e50 (9): Bad file descriptor 00:21:42.088 [2024-11-19 10:15:55.808225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:42.088 [2024-11-19 10:15:55.808360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:42.088 [2024-11-19 10:15:55.808378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:42.088 [2024-11-19 10:15:55.808510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:42.088 [2024-11-19 10:15:55.808616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:42.088 10:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82515 00:21:43.957 8700.50 IOPS, 33.99 MiB/s [2024-11-19T10:15:57.846Z] 5800.33 IOPS, 22.66 MiB/s [2024-11-19T10:15:57.846Z] [2024-11-19 10:15:57.808842] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.957 [2024-11-19 10:15:57.808949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2409e50 with addr=10.0.0.3, port=4420 00:21:43.957 [2024-11-19 10:15:57.808967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409e50 is same with the state(6) to be set 00:21:43.957 [2024-11-19 10:15:57.808996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2409e50 (9): Bad file descriptor 00:21:43.957 [2024-11-19 10:15:57.809016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:43.957 [2024-11-19 10:15:57.809025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:43.957 [2024-11-19 10:15:57.809036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:43.957 [2024-11-19 10:15:57.809048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:21:43.957 [2024-11-19 10:15:57.809059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:45.826 4350.25 IOPS, 16.99 MiB/s [2024-11-19T10:15:59.973Z] 3480.20 IOPS, 13.59 MiB/s [2024-11-19T10:15:59.973Z] [2024-11-19 10:15:59.809254] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.084 [2024-11-19 10:15:59.809332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2409e50 with addr=10.0.0.3, port=4420 00:21:46.084 [2024-11-19 10:15:59.809350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409e50 is same with the state(6) to be set 00:21:46.084 [2024-11-19 10:15:59.809376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2409e50 (9): Bad file descriptor 00:21:46.084 [2024-11-19 10:15:59.809396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:46.084 [2024-11-19 10:15:59.809407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:46.084 [2024-11-19 10:15:59.809420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:46.084 [2024-11-19 10:15:59.809432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:46.084 [2024-11-19 10:15:59.809443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:47.974 2900.17 IOPS, 11.33 MiB/s [2024-11-19T10:16:01.863Z] 2485.86 IOPS, 9.71 MiB/s [2024-11-19T10:16:01.863Z] [2024-11-19 10:16:01.809520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:47.974 [2024-11-19 10:16:01.809589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:47.974 [2024-11-19 10:16:01.809603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:47.974 [2024-11-19 10:16:01.809614] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:21:47.974 [2024-11-19 10:16:01.809626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:21:49.168 2175.12 IOPS, 8.50 MiB/s
00:21:49.168 Latency(us)
00:21:49.168 [2024-11-19T10:16:03.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.168 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:21:49.168 NVMe0n1 : 8.17 2128.96 8.32 15.66 0.00 59747.53 7923.90 7046430.72
00:21:49.168 [2024-11-19T10:16:03.057Z] ===================================================================================================================
00:21:49.168 [2024-11-19T10:16:03.057Z] Total : 2128.96 8.32 15.66 0.00 59747.53 7923.90 7046430.72
00:21:49.168 {
00:21:49.168 "results": [
00:21:49.168 {
00:21:49.168 "job": "NVMe0n1",
00:21:49.168 "core_mask": "0x4",
00:21:49.168 "workload": "randread",
00:21:49.168 "status": "finished",
00:21:49.168 "queue_depth": 128,
00:21:49.168 "io_size": 4096,
00:21:49.168 "runtime": 8.173468,
00:21:49.168 "iops": 2128.9616598486714,
00:21:49.168 "mibps": 8.316256483783873,
00:21:49.168 "io_failed": 128,
00:21:49.168 "io_timeout": 0,
00:21:49.168 "avg_latency_us": 59747.53217971258,
00:21:49.168 "min_latency_us": 7923.898181818182,
00:21:49.168 "max_latency_us": 7046430.72
00:21:49.168 }
00:21:49.168 ],
00:21:49.168 "core_count": 1
00:21:49.168 }
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:21:49.168 Attaching 5 probes...
00:21:49.168 1417.766495: reset bdev controller NVMe0
00:21:49.168 1417.848288: reconnect bdev controller NVMe0
00:21:49.168 3418.646003: reconnect delay bdev controller NVMe0
00:21:49.168 3418.687825: reconnect bdev controller NVMe0
00:21:49.168 5419.080752: reconnect delay bdev controller NVMe0
00:21:49.168 5419.105861: reconnect bdev controller NVMe0
00:21:49.168 7419.462496: reconnect delay bdev controller NVMe0
00:21:49.168 7419.485997: reconnect bdev controller NVMe0
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82468
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82456
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82456 ']'
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82456
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82456
00:21:49.168 killing process with pid 82456
00:21:49.168 Received shutdown signal, test time was about 8.234384 seconds
00:21:49.168
00:21:49.168 Latency(us)
00:21:49.168 [2024-11-19T10:16:03.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.168 [2024-11-19T10:16:03.057Z] ===================================================================================================================
00:21:49.168 [2024-11-19T10:16:03.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:49.168 10:16:02
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82456' 00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82456 00:21:49.168 10:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82456 00:21:49.168 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:49.736 rmmod nvme_tcp 00:21:49.736 rmmod nvme_fabrics 00:21:49.736 rmmod nvme_keyring 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82028 ']' 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82028 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82028 ']' 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82028 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82028 00:21:49.736 killing process with pid 82028 00:21:49.736 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:49.737 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:49.737 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82028' 00:21:49.737 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82028 00:21:49.737 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82028 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:49.996 10:16:03 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:49.996 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:21:50.254 ************************************ 00:21:50.254 END TEST nvmf_timeout 00:21:50.254 ************************************ 00:21:50.254 00:21:50.254 real 0m47.009s 00:21:50.254 user 2m17.938s 00:21:50.254 sys 0m5.788s 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:50.254 00:21:50.254 real 5m10.865s 00:21:50.254 user 13m31.616s 00:21:50.254 sys 1m10.116s 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.254 ************************************ 00:21:50.254 END TEST nvmf_host 
00:21:50.254 ************************************ 00:21:50.254 10:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.254 10:16:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:21:50.254 10:16:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:21:50.254 00:21:50.254 real 13m3.060s 00:21:50.254 user 31m27.532s 00:21:50.254 sys 3m12.676s 00:21:50.254 10:16:04 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.254 10:16:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.254 ************************************ 00:21:50.254 END TEST nvmf_tcp 00:21:50.254 ************************************ 00:21:50.254 10:16:04 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:21:50.254 10:16:04 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:50.254 10:16:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:50.254 10:16:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.254 10:16:04 -- common/autotest_common.sh@10 -- # set +x 00:21:50.254 ************************************ 00:21:50.254 START TEST nvmf_dif 00:21:50.254 ************************************ 00:21:50.254 10:16:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:50.513 * Looking for test storage... 00:21:50.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:50.513 10:16:04 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:50.513 10:16:04 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:21:50.513 10:16:04 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:50.513 10:16:04 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.513 10:16:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:21:50.514 10:16:04 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.514 10:16:04 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:50.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.514 --rc genhtml_branch_coverage=1 00:21:50.514 --rc genhtml_function_coverage=1 00:21:50.514 --rc genhtml_legend=1 00:21:50.514 --rc geninfo_all_blocks=1 00:21:50.514 --rc geninfo_unexecuted_blocks=1 00:21:50.514 00:21:50.514 ' 00:21:50.514 10:16:04 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:50.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.514 --rc genhtml_branch_coverage=1 00:21:50.514 --rc genhtml_function_coverage=1 00:21:50.514 --rc genhtml_legend=1 00:21:50.514 --rc geninfo_all_blocks=1 00:21:50.514 --rc geninfo_unexecuted_blocks=1 00:21:50.514 00:21:50.514 ' 00:21:50.514 10:16:04 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:50.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.514 --rc genhtml_branch_coverage=1 00:21:50.514 --rc genhtml_function_coverage=1 00:21:50.514 --rc genhtml_legend=1 00:21:50.514 --rc geninfo_all_blocks=1 00:21:50.514 --rc geninfo_unexecuted_blocks=1 00:21:50.514 00:21:50.514 ' 00:21:50.514 10:16:04 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:50.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.514 --rc genhtml_branch_coverage=1 00:21:50.514 --rc genhtml_function_coverage=1 00:21:50.514 --rc genhtml_legend=1 00:21:50.514 --rc geninfo_all_blocks=1 00:21:50.514 --rc geninfo_unexecuted_blocks=1 00:21:50.514 00:21:50.514 ' 00:21:50.514 10:16:04 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.514 10:16:04 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.514 10:16:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.514 10:16:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.514 10:16:04 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.514 10:16:04 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.514 10:16:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:50.514 10:16:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.514 10:16:04 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:50.514 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:50.514 10:16:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:50.514 10:16:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:50.514 10:16:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:50.514 10:16:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:50.514 10:16:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.514 10:16:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:50.514 10:16:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:50.514 10:16:04 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:50.515 Cannot find device 
"nvmf_init_br" 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:50.515 Cannot find device "nvmf_init_br2" 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:50.515 Cannot find device "nvmf_tgt_br" 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@164 -- # true 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:50.515 Cannot find device "nvmf_tgt_br2" 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@165 -- # true 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:50.515 Cannot find device "nvmf_init_br" 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@166 -- # true 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:50.515 Cannot find device "nvmf_init_br2" 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@167 -- # true 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:50.515 Cannot find device "nvmf_tgt_br" 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@168 -- # true 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:50.515 Cannot find device "nvmf_tgt_br2" 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@169 -- # true 00:21:50.515 10:16:04 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:50.774 Cannot find device "nvmf_br" 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@170 -- # true 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:50.774 Cannot find device "nvmf_init_if" 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@171 -- # true 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:50.774 Cannot find device "nvmf_init_if2" 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@172 -- # true 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:50.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@173 -- # true 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:50.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@174 -- # true 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:50.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:50.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:21:50.774 00:21:50.774 --- 10.0.0.3 ping statistics --- 00:21:50.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.774 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:50.774 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:50.774 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:21:50.774 00:21:50.774 --- 10.0.0.4 ping statistics --- 00:21:50.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.774 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:50.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:50.774 00:21:50.774 --- 10.0.0.1 ping statistics --- 00:21:50.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.774 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:50.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:50.774 00:21:50.774 --- 10.0.0.2 ping statistics --- 00:21:50.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.774 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:50.774 10:16:04 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:51.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:51.341 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:51.341 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:51.341 10:16:05 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.341 10:16:05 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:51.341 10:16:05 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:51.341 10:16:05 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.341 10:16:05 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:51.341 10:16:05 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:51.342 10:16:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:51.342 10:16:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:51.342 10:16:05 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.342 10:16:05 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.342 10:16:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:51.342 10:16:05 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83009 00:21:51.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.342 10:16:05 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83009 00:21:51.342 10:16:05 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:51.342 10:16:05 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83009 ']' 00:21:51.342 10:16:05 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.342 10:16:05 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.342 10:16:05 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
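The nvmf_veth_init sequence that completes above builds the test topology: veth pairs for the initiator and target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, and iptables ACCEPT rules for the NVMe/TCP port 4420. As an illustration only (not part of the captured output), the first initiator/target pair can be reproduced with the same iproute2/iptables commands the script runs, assuming root and a host where these interfaces do not already exist:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # the target-side address should now answer, as in the log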
00:21:51.342 10:16:05 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.342 10:16:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:51.342 [2024-11-19 10:16:05.114626] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:21:51.342 [2024-11-19 10:16:05.115017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.600 [2024-11-19 10:16:05.272704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.600 [2024-11-19 10:16:05.351441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.600 [2024-11-19 10:16:05.351692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.600 [2024-11-19 10:16:05.351855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.600 [2024-11-19 10:16:05.351911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.600 [2024-11-19 10:16:05.352050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.600 [2024-11-19 10:16:05.352510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.600 [2024-11-19 10:16:05.409267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:51.600 10:16:05 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.600 10:16:05 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:21:51.600 10:16:05 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:51.600 10:16:05 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.600 10:16:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:51.860 10:16:05 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.860 10:16:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:51.860 10:16:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:51.860 10:16:05 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.860 10:16:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:51.860 [2024-11-19 10:16:05.524583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.860 10:16:05 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.860 10:16:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:51.860 10:16:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:51.860 10:16:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.860 10:16:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:51.860 ************************************ 00:21:51.860 START TEST fio_dif_1_default 00:21:51.860 ************************************ 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:51.860 10:16:05 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:51.860 bdev_null0 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:51.860 [2024-11-19 10:16:05.573387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.860 { 00:21:51.860 "params": { 00:21:51.860 "name": "Nvme$subsystem", 00:21:51.860 "trtype": "$TEST_TRANSPORT", 00:21:51.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.860 "adrfam": "ipv4", 00:21:51.860 "trsvcid": "$NVMF_PORT", 00:21:51.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.860 "hdgst": ${hdgst:-false}, 00:21:51.860 "ddgst": ${ddgst:-false} 00:21:51.860 }, 00:21:51.860 "method": "bdev_nvme_attach_controller" 00:21:51.860 } 00:21:51.860 EOF 00:21:51.860 )") 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:51.860 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:51.861 "params": { 00:21:51.861 "name": "Nvme0", 00:21:51.861 "trtype": "tcp", 00:21:51.861 "traddr": "10.0.0.3", 00:21:51.861 "adrfam": "ipv4", 00:21:51.861 "trsvcid": "4420", 00:21:51.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.861 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:51.861 "hdgst": false, 00:21:51.861 "ddgst": false 00:21:51.861 }, 00:21:51.861 "method": "bdev_nvme_attach_controller" 00:21:51.861 }' 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 
00:21:51.861 10:16:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:52.120 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:52.120 fio-3.35 00:21:52.120 Starting 1 thread 00:22:04.340 00:22:04.340 filename0: (groupid=0, jobs=1): err= 0: pid=83068: Tue Nov 19 10:16:16 2024 00:22:04.340 read: IOPS=8655, BW=33.8MiB/s (35.5MB/s)(338MiB/10001msec) 00:22:04.340 slat (usec): min=6, max=929, avg= 8.55, stdev= 4.12 00:22:04.340 clat (usec): min=348, max=3238, avg=437.34, stdev=31.26 00:22:04.340 lat (usec): min=355, max=3280, avg=445.89, stdev=31.95 00:22:04.340 clat percentiles (usec): 00:22:04.340 | 1.00th=[ 383], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 420], 00:22:04.340 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 437], 60.00th=[ 441], 00:22:04.340 | 70.00th=[ 449], 80.00th=[ 453], 90.00th=[ 465], 95.00th=[ 478], 00:22:04.340 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 545], 99.95th=[ 562], 00:22:04.340 | 99.99th=[ 717] 00:22:04.340 bw ( KiB/s): min=33952, max=35328, per=100.00%, avg=34632.42, stdev=308.75, samples=19 00:22:04.340 iops : min= 8488, max= 8832, avg=8658.11, stdev=77.19, samples=19 00:22:04.340 lat (usec) : 500=98.87%, 750=1.13% 00:22:04.340 lat (msec) : 2=0.01%, 4=0.01% 00:22:04.340 cpu : usr=84.60%, sys=13.59%, ctx=19, majf=0, minf=9 00:22:04.340 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.340 issued rwts: total=86564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.340 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:04.340 00:22:04.340 Run status group 0 (all jobs): 00:22:04.340 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=338MiB (355MB), run=10001-10001msec 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:04.340 ************************************ 00:22:04.340 END TEST fio_dif_1_default 00:22:04.340 ************************************ 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.340 00:22:04.340 real 0m11.080s 00:22:04.340 user 0m9.149s 00:22:04.340 sys 0m1.647s 
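The fio_dif_1_default pass that ends here reduces to a short RPC plus fio-plugin sequence: create a DIF-capable null bdev, export it through an NVMe/TCP subsystem listening on the target address, and drive it with fio's spdk_bdev ioengine via a generated JSON bdev config. A condensed sketch of that flow, assembled from the commands logged above (paths, NQNs and options are taken from the log; rpc.py stands in for the harness's rpc_cmd wrapper; this is an editor's illustration, not the test script itself):

  SPDK=/home/vagrant/spdk_repo/spdk              # repo path as used in this run
  RPC="$SPDK/scripts/rpc.py"                     # assumed stand-in for the rpc_cmd calls above
  $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # fio then attaches through the SPDK bdev fio plugin; in the harness the generated
  # bdev_nvme_attach_controller JSON and the fio job file arrive on /dev/fd/62 and /dev/fd/61:
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61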
00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:04.340 10:16:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:04.340 10:16:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:04.340 10:16:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.340 10:16:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:04.340 ************************************ 00:22:04.340 START TEST fio_dif_1_multi_subsystems 00:22:04.340 ************************************ 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:04.340 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 bdev_null0 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 [2024-11-19 10:16:16.708203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.341 
10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 bdev_null1 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:04.341 10:16:16 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.341 { 00:22:04.341 "params": { 00:22:04.341 "name": "Nvme$subsystem", 00:22:04.341 "trtype": "$TEST_TRANSPORT", 00:22:04.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.341 "adrfam": "ipv4", 00:22:04.341 "trsvcid": "$NVMF_PORT", 00:22:04.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.341 "hdgst": ${hdgst:-false}, 00:22:04.341 "ddgst": ${ddgst:-false} 00:22:04.341 }, 00:22:04.341 "method": "bdev_nvme_attach_controller" 00:22:04.341 } 00:22:04.341 EOF 00:22:04.341 )") 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.341 { 00:22:04.341 "params": { 00:22:04.341 "name": "Nvme$subsystem", 00:22:04.341 "trtype": "$TEST_TRANSPORT", 00:22:04.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.341 "adrfam": "ipv4", 00:22:04.341 "trsvcid": "$NVMF_PORT", 00:22:04.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.341 "hdgst": ${hdgst:-false}, 00:22:04.341 "ddgst": ${ddgst:-false} 00:22:04.341 }, 00:22:04.341 "method": "bdev_nvme_attach_controller" 00:22:04.341 } 00:22:04.341 EOF 00:22:04.341 )") 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:04.341 "params": { 00:22:04.341 "name": "Nvme0", 00:22:04.341 "trtype": "tcp", 00:22:04.341 "traddr": "10.0.0.3", 00:22:04.341 "adrfam": "ipv4", 00:22:04.341 "trsvcid": "4420", 00:22:04.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:04.341 "hdgst": false, 00:22:04.341 "ddgst": false 00:22:04.341 }, 00:22:04.341 "method": "bdev_nvme_attach_controller" 00:22:04.341 },{ 00:22:04.341 "params": { 00:22:04.341 "name": "Nvme1", 00:22:04.341 "trtype": "tcp", 00:22:04.341 "traddr": "10.0.0.3", 00:22:04.341 "adrfam": "ipv4", 00:22:04.341 "trsvcid": "4420", 00:22:04.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.341 "hdgst": false, 00:22:04.341 "ddgst": false 00:22:04.341 }, 00:22:04.341 "method": "bdev_nvme_attach_controller" 00:22:04.341 }' 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:04.341 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:04.342 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:04.342 10:16:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:04.342 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:04.342 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:04.342 fio-3.35 00:22:04.342 Starting 2 threads 00:22:14.317 00:22:14.317 filename0: (groupid=0, jobs=1): err= 0: pid=83229: Tue Nov 19 10:16:27 2024 00:22:14.317 read: IOPS=4717, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:22:14.317 slat (nsec): min=6636, max=68110, avg=13767.20, stdev=4558.35 00:22:14.317 clat (usec): min=665, max=2265, avg=809.43, stdev=39.87 00:22:14.317 lat (usec): min=679, max=2279, avg=823.19, stdev=40.46 00:22:14.317 clat percentiles (usec): 00:22:14.317 | 1.00th=[ 725], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:22:14.317 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:22:14.317 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 873], 00:22:14.317 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 963], 00:22:14.317 | 99.99th=[ 1352] 00:22:14.317 bw ( KiB/s): min=18624, max=19232, per=50.04%, avg=18886.74, stdev=168.51, samples=19 00:22:14.317 iops : min= 4656, max= 
4808, avg=4721.68, stdev=42.13, samples=19 00:22:14.317 lat (usec) : 750=4.01%, 1000=95.96% 00:22:14.317 lat (msec) : 2=0.02%, 4=0.01% 00:22:14.317 cpu : usr=89.61%, sys=8.93%, ctx=7, majf=0, minf=0 00:22:14.317 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.317 issued rwts: total=47184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.317 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:14.317 filename1: (groupid=0, jobs=1): err= 0: pid=83230: Tue Nov 19 10:16:27 2024 00:22:14.317 read: IOPS=4717, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:22:14.317 slat (nsec): min=6837, max=56597, avg=13414.93, stdev=4275.74 00:22:14.317 clat (usec): min=631, max=2272, avg=811.54, stdev=49.36 00:22:14.317 lat (usec): min=638, max=2284, avg=824.96, stdev=50.59 00:22:14.317 clat percentiles (usec): 00:22:14.317 | 1.00th=[ 701], 5.00th=[ 725], 10.00th=[ 750], 20.00th=[ 775], 00:22:14.317 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 824], 00:22:14.317 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 889], 00:22:14.317 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 971], 99.95th=[ 979], 00:22:14.317 | 99.99th=[ 1418] 00:22:14.317 bw ( KiB/s): min=18624, max=19232, per=50.04%, avg=18886.74, stdev=168.51, samples=19 00:22:14.317 iops : min= 4656, max= 4808, avg=4721.68, stdev=42.13, samples=19 00:22:14.317 lat (usec) : 750=10.62%, 1000=89.35% 00:22:14.317 lat (msec) : 2=0.02%, 4=0.01% 00:22:14.317 cpu : usr=89.31%, sys=9.34%, ctx=14, majf=0, minf=0 00:22:14.317 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.317 issued rwts: total=47184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.317 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:14.317 00:22:14.317 Run status group 0 (all jobs): 00:22:14.317 READ: bw=36.9MiB/s (38.6MB/s), 18.4MiB/s-18.4MiB/s (19.3MB/s-19.3MB/s), io=369MiB (387MB), run=10001-10001msec 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 ************************************ 00:22:14.317 END TEST fio_dif_1_multi_subsystems 00:22:14.317 ************************************ 00:22:14.317 00:22:14.317 real 0m11.199s 00:22:14.318 user 0m18.709s 00:22:14.318 sys 0m2.154s 00:22:14.318 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:14.318 10:16:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:14.318 10:16:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:14.318 10:16:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:14.318 10:16:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.318 10:16:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:14.318 ************************************ 00:22:14.318 START TEST fio_dif_rand_params 00:22:14.318 ************************************ 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:14.318 10:16:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.318 bdev_null0 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:14.318 [2024-11-19 10:16:27.961993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:14.318 10:16:27 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.318 { 00:22:14.318 "params": { 00:22:14.318 "name": "Nvme$subsystem", 00:22:14.318 "trtype": "$TEST_TRANSPORT", 00:22:14.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.318 "adrfam": "ipv4", 00:22:14.318 "trsvcid": "$NVMF_PORT", 00:22:14.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.318 "hdgst": ${hdgst:-false}, 00:22:14.318 "ddgst": ${ddgst:-false} 00:22:14.318 }, 00:22:14.318 "method": "bdev_nvme_attach_controller" 00:22:14.318 } 00:22:14.318 EOF 00:22:14.318 )") 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:14.318 10:16:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:14.318 "params": { 00:22:14.318 "name": "Nvme0", 00:22:14.318 "trtype": "tcp", 00:22:14.318 "traddr": "10.0.0.3", 00:22:14.318 "adrfam": "ipv4", 00:22:14.318 "trsvcid": "4420", 00:22:14.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:14.318 "hdgst": false, 00:22:14.318 "ddgst": false 00:22:14.318 }, 00:22:14.318 "method": "bdev_nvme_attach_controller" 00:22:14.318 }' 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:14.318 10:16:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.318 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:14.318 ... 
00:22:14.318 fio-3.35 00:22:14.318 Starting 3 threads 00:22:20.920 00:22:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=83386: Tue Nov 19 10:16:33 2024 00:22:20.920 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5005msec) 00:22:20.920 slat (nsec): min=7963, max=39427, avg=14249.49, stdev=3180.00 00:22:20.920 clat (usec): min=8504, max=14285, avg=11499.19, stdev=424.81 00:22:20.920 lat (usec): min=8518, max=14313, avg=11513.44, stdev=424.88 00:22:20.920 clat percentiles (usec): 00:22:20.920 | 1.00th=[11207], 5.00th=[11207], 10.00th=[11338], 20.00th=[11338], 00:22:20.920 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:22:20.920 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:22:20.920 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14222], 99.95th=[14222], 00:22:20.920 | 99.99th=[14222] 00:22:20.920 bw ( KiB/s): min=32256, max=33792, per=33.30%, avg=33254.40, stdev=518.36, samples=10 00:22:20.920 iops : min= 252, max= 264, avg=259.80, stdev= 4.05, samples=10 00:22:20.920 lat (msec) : 10=0.23%, 20=99.77% 00:22:20.920 cpu : usr=91.07%, sys=8.37%, ctx=43, majf=0, minf=0 00:22:20.920 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.920 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=83387: Tue Nov 19 10:16:33 2024 00:22:20.920 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5004msec) 00:22:20.920 slat (nsec): min=7353, max=40184, avg=14476.00, stdev=3095.82 00:22:20.920 clat (usec): min=8506, max=14282, avg=11497.56, stdev=423.39 00:22:20.920 lat (usec): min=8520, max=14308, avg=11512.03, stdev=423.53 00:22:20.920 clat percentiles (usec): 00:22:20.920 | 1.00th=[11207], 5.00th=[11207], 10.00th=[11338], 20.00th=[11338], 00:22:20.920 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:22:20.920 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:22:20.920 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14222], 99.95th=[14222], 00:22:20.920 | 99.99th=[14222] 00:22:20.920 bw ( KiB/s): min=32256, max=33792, per=33.30%, avg=33254.40, stdev=518.36, samples=10 00:22:20.920 iops : min= 252, max= 264, avg=259.80, stdev= 4.05, samples=10 00:22:20.920 lat (msec) : 10=0.23%, 20=99.77% 00:22:20.920 cpu : usr=91.15%, sys=8.35%, ctx=6, majf=0, minf=0 00:22:20.920 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.920 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=83388: Tue Nov 19 10:16:33 2024 00:22:20.920 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5006msec) 00:22:20.920 slat (nsec): min=7159, max=40986, avg=10866.62, stdev=4308.22 00:22:20.920 clat (usec): min=10822, max=14521, avg=11506.05, stdev=400.24 00:22:20.920 lat (usec): min=10830, max=14539, avg=11516.91, stdev=400.34 00:22:20.920 clat percentiles (usec): 00:22:20.920 | 1.00th=[11207], 5.00th=[11207], 10.00th=[11338], 20.00th=[11338], 00:22:20.920 | 30.00th=[11338], 40.00th=[11338], 
50.00th=[11469], 60.00th=[11469], 00:22:20.920 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11863], 00:22:20.920 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14484], 99.95th=[14484], 00:22:20.920 | 99.99th=[14484] 00:22:20.920 bw ( KiB/s): min=32256, max=33792, per=33.30%, avg=33254.40, stdev=518.36, samples=10 00:22:20.920 iops : min= 252, max= 264, avg=259.80, stdev= 4.05, samples=10 00:22:20.920 lat (msec) : 20=100.00% 00:22:20.920 cpu : usr=90.93%, sys=8.41%, ctx=15, majf=0, minf=0 00:22:20.920 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.920 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:20.920 00:22:20.920 Run status group 0 (all jobs): 00:22:20.920 READ: bw=97.5MiB/s (102MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=488MiB (512MB), run=5004-5006msec 00:22:20.920 10:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:20.920 10:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:20.921 10:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:20.921 10:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:20.921 10:16:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 bdev_null0 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 [2024-11-19 10:16:34.044689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 bdev_null1 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 bdev_null2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.921 { 00:22:20.921 "params": { 00:22:20.921 "name": "Nvme$subsystem", 00:22:20.921 "trtype": "$TEST_TRANSPORT", 00:22:20.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.921 "adrfam": "ipv4", 00:22:20.921 "trsvcid": "$NVMF_PORT", 00:22:20.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:20.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.921 "hdgst": ${hdgst:-false}, 00:22:20.921 "ddgst": ${ddgst:-false} 00:22:20.921 }, 00:22:20.921 "method": "bdev_nvme_attach_controller" 00:22:20.921 } 00:22:20.921 EOF 00:22:20.921 )") 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.921 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.921 { 00:22:20.921 "params": { 00:22:20.921 "name": "Nvme$subsystem", 00:22:20.921 "trtype": "$TEST_TRANSPORT", 00:22:20.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.921 "adrfam": "ipv4", 00:22:20.922 "trsvcid": "$NVMF_PORT", 00:22:20.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.922 "hdgst": ${hdgst:-false}, 00:22:20.922 "ddgst": ${ddgst:-false} 00:22:20.922 }, 00:22:20.922 "method": "bdev_nvme_attach_controller" 00:22:20.922 } 00:22:20.922 EOF 00:22:20.922 )") 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:20.922 10:16:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.922 { 00:22:20.922 "params": { 00:22:20.922 "name": "Nvme$subsystem", 00:22:20.922 "trtype": "$TEST_TRANSPORT", 00:22:20.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.922 "adrfam": "ipv4", 00:22:20.922 "trsvcid": "$NVMF_PORT", 00:22:20.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.922 "hdgst": ${hdgst:-false}, 00:22:20.922 "ddgst": ${ddgst:-false} 00:22:20.922 }, 00:22:20.922 "method": "bdev_nvme_attach_controller" 00:22:20.922 } 00:22:20.922 EOF 00:22:20.922 )") 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:20.922 "params": { 00:22:20.922 "name": "Nvme0", 00:22:20.922 "trtype": "tcp", 00:22:20.922 "traddr": "10.0.0.3", 00:22:20.922 "adrfam": "ipv4", 00:22:20.922 "trsvcid": "4420", 00:22:20.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:20.922 "hdgst": false, 00:22:20.922 "ddgst": false 00:22:20.922 }, 00:22:20.922 "method": "bdev_nvme_attach_controller" 00:22:20.922 },{ 00:22:20.922 "params": { 00:22:20.922 "name": "Nvme1", 00:22:20.922 "trtype": "tcp", 00:22:20.922 "traddr": "10.0.0.3", 00:22:20.922 "adrfam": "ipv4", 00:22:20.922 "trsvcid": "4420", 00:22:20.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.922 "hdgst": false, 00:22:20.922 "ddgst": false 00:22:20.922 }, 00:22:20.922 "method": "bdev_nvme_attach_controller" 00:22:20.922 },{ 00:22:20.922 "params": { 00:22:20.922 "name": "Nvme2", 00:22:20.922 "trtype": "tcp", 00:22:20.922 "traddr": "10.0.0.3", 00:22:20.922 "adrfam": "ipv4", 00:22:20.922 "trsvcid": "4420", 00:22:20.922 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.922 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:20.922 "hdgst": false, 00:22:20.922 "ddgst": false 00:22:20.922 }, 00:22:20.922 "method": "bdev_nvme_attach_controller" 00:22:20.922 }' 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:20.922 10:16:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.922 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:20.922 ... 00:22:20.922 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:20.922 ... 00:22:20.922 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:20.922 ... 00:22:20.922 fio-3.35 00:22:20.922 Starting 24 threads 00:22:33.201 00:22:33.201 filename0: (groupid=0, jobs=1): err= 0: pid=83483: Tue Nov 19 10:16:45 2024 00:22:33.201 read: IOPS=227, BW=909KiB/s (930kB/s)(9120KiB/10038msec) 00:22:33.201 slat (usec): min=3, max=8037, avg=17.15, stdev=168.10 00:22:33.201 clat (msec): min=10, max=131, avg=70.27, stdev=19.63 00:22:33.201 lat (msec): min=10, max=131, avg=70.29, stdev=19.63 00:22:33.201 clat percentiles (msec): 00:22:33.201 | 1.00th=[ 20], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:22:33.201 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:22:33.201 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:22:33.201 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 122], 00:22:33.201 | 99.99th=[ 132] 00:22:33.201 bw ( KiB/s): min= 768, max= 1360, per=4.17%, avg=905.20, stdev=127.82, samples=20 00:22:33.201 iops : min= 192, max= 340, avg=226.30, stdev=31.96, samples=20 00:22:33.201 lat (msec) : 20=1.40%, 50=19.17%, 100=71.71%, 250=7.72% 00:22:33.201 cpu : usr=31.67%, sys=1.56%, ctx=882, majf=0, minf=9 00:22:33.201 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:22:33.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.201 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.201 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.201 filename0: (groupid=0, jobs=1): err= 0: pid=83484: Tue Nov 19 10:16:45 2024 00:22:33.201 read: IOPS=236, BW=946KiB/s (968kB/s)(9476KiB/10022msec) 00:22:33.201 slat (usec): min=4, max=8026, avg=23.15, stdev=246.88 00:22:33.201 clat (msec): min=23, max=119, avg=67.54, stdev=18.90 00:22:33.201 lat (msec): min=23, max=119, avg=67.56, stdev=18.91 00:22:33.201 clat percentiles (msec): 00:22:33.201 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:22:33.201 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:22:33.201 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 108], 00:22:33.201 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:22:33.201 | 99.99th=[ 121] 00:22:33.201 bw ( KiB/s): min= 768, max= 1192, per=4.34%, avg=943.60, stdev=98.72, samples=20 00:22:33.201 iops : min= 192, max= 298, avg=235.90, stdev=24.68, samples=20 00:22:33.201 lat (msec) : 50=25.12%, 100=67.37%, 250=7.51% 00:22:33.201 cpu : usr=33.67%, sys=1.75%, ctx=972, majf=0, minf=9 00:22:33.201 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:33.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.201 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.201 issued rwts: total=2369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.201 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:22:33.201 filename0: (groupid=0, jobs=1): err= 0: pid=83485: Tue Nov 19 10:16:45 2024 00:22:33.201 read: IOPS=240, BW=963KiB/s (986kB/s)(9632KiB/10001msec) 00:22:33.201 slat (usec): min=4, max=8030, avg=17.82, stdev=163.42 00:22:33.201 clat (usec): min=1201, max=121218, avg=66349.87, stdev=20452.26 00:22:33.201 lat (usec): min=1209, max=121244, avg=66367.70, stdev=20451.75 00:22:33.201 clat percentiles (msec): 00:22:33.201 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 48], 00:22:33.201 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:22:33.201 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 107], 00:22:33.201 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:22:33.201 | 99.99th=[ 122] 00:22:33.201 bw ( KiB/s): min= 792, max= 1072, per=4.31%, avg=936.00, stdev=67.62, samples=19 00:22:33.201 iops : min= 198, max= 268, avg=234.00, stdev=16.90, samples=19 00:22:33.201 lat (msec) : 2=0.37%, 4=0.42%, 10=1.20%, 20=0.29%, 50=26.54% 00:22:33.201 lat (msec) : 100=64.95%, 250=6.23% 00:22:33.202 cpu : usr=31.71%, sys=1.69%, ctx=879, majf=0, minf=9 00:22:33.202 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:33.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 issued rwts: total=2408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.202 filename0: (groupid=0, jobs=1): err= 0: pid=83486: Tue Nov 19 10:16:45 2024 00:22:33.202 read: IOPS=230, BW=924KiB/s (946kB/s)(9244KiB/10007msec) 00:22:33.202 slat (usec): min=3, max=8026, avg=31.74, stdev=372.29 00:22:33.202 clat (msec): min=8, max=158, avg=69.09, stdev=21.05 00:22:33.202 lat (msec): min=8, max=158, avg=69.12, stdev=21.08 00:22:33.202 clat percentiles (msec): 00:22:33.202 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:22:33.202 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:22:33.202 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:22:33.202 | 99.00th=[ 123], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 159], 00:22:33.202 | 99.99th=[ 159] 00:22:33.202 bw ( KiB/s): min= 528, max= 1024, per=4.19%, avg=911.47, stdev=132.38, samples=19 00:22:33.202 iops : min= 132, max= 256, avg=227.84, stdev=33.09, samples=19 00:22:33.202 lat (msec) : 10=0.26%, 20=0.39%, 50=26.01%, 100=64.52%, 250=8.83% 00:22:33.202 cpu : usr=31.53%, sys=1.82%, ctx=887, majf=0, minf=9 00:22:33.202 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:22:33.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.202 filename0: (groupid=0, jobs=1): err= 0: pid=83487: Tue Nov 19 10:16:45 2024 00:22:33.202 read: IOPS=238, BW=955KiB/s (977kB/s)(9552KiB/10007msec) 00:22:33.202 slat (usec): min=8, max=8029, avg=19.89, stdev=183.46 00:22:33.202 clat (msec): min=9, max=122, avg=66.92, stdev=19.30 00:22:33.202 lat (msec): min=9, max=122, avg=66.94, stdev=19.29 00:22:33.202 clat percentiles (msec): 00:22:33.202 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:22:33.202 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 
00:22:33.202 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 107], 00:22:33.202 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:22:33.202 | 99.99th=[ 123] 00:22:33.202 bw ( KiB/s): min= 768, max= 1144, per=4.35%, avg=944.42, stdev=98.32, samples=19 00:22:33.202 iops : min= 192, max= 286, avg=236.11, stdev=24.58, samples=19 00:22:33.202 lat (msec) : 10=0.25%, 20=0.17%, 50=26.55%, 100=66.37%, 250=6.66% 00:22:33.202 cpu : usr=34.46%, sys=2.06%, ctx=1055, majf=0, minf=9 00:22:33.202 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:33.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 complete : 0=0.0%, 4=86.7%, 8=13.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.202 filename0: (groupid=0, jobs=1): err= 0: pid=83488: Tue Nov 19 10:16:45 2024 00:22:33.202 read: IOPS=241, BW=965KiB/s (989kB/s)(9656KiB/10002msec) 00:22:33.202 slat (usec): min=4, max=8026, avg=19.86, stdev=182.37 00:22:33.202 clat (usec): min=1471, max=125889, avg=66173.53, stdev=20396.14 00:22:33.202 lat (usec): min=1479, max=125910, avg=66193.39, stdev=20393.20 00:22:33.202 clat percentiles (msec): 00:22:33.202 | 1.00th=[ 5], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 49], 00:22:33.202 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 69], 60.00th=[ 72], 00:22:33.202 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 107], 00:22:33.202 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:22:33.202 | 99.99th=[ 127] 00:22:33.202 bw ( KiB/s): min= 816, max= 1072, per=4.33%, avg=941.05, stdev=65.33, samples=19 00:22:33.202 iops : min= 204, max= 268, avg=235.26, stdev=16.33, samples=19 00:22:33.202 lat (msec) : 2=0.12%, 4=0.37%, 10=1.20%, 20=0.29%, 50=22.33% 00:22:33.202 lat (msec) : 100=68.93%, 250=6.75% 00:22:33.202 cpu : usr=40.38%, sys=2.61%, ctx=1313, majf=0, minf=9 00:22:33.202 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:33.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 issued rwts: total=2414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.202 filename0: (groupid=0, jobs=1): err= 0: pid=83489: Tue Nov 19 10:16:45 2024 00:22:33.202 read: IOPS=230, BW=922KiB/s (945kB/s)(9272KiB/10052msec) 00:22:33.202 slat (usec): min=4, max=3053, avg=17.67, stdev=92.03 00:22:33.202 clat (msec): min=12, max=135, avg=69.20, stdev=21.43 00:22:33.202 lat (msec): min=12, max=135, avg=69.22, stdev=21.43 00:22:33.202 clat percentiles (msec): 00:22:33.202 | 1.00th=[ 13], 5.00th=[ 28], 10.00th=[ 46], 20.00th=[ 51], 00:22:33.202 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:22:33.202 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:22:33.202 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 128], 99.95th=[ 132], 00:22:33.202 | 99.99th=[ 136] 00:22:33.202 bw ( KiB/s): min= 744, max= 1552, per=4.23%, avg=920.45, stdev=167.33, samples=20 00:22:33.202 iops : min= 186, max= 388, avg=230.10, stdev=41.84, samples=20 00:22:33.202 lat (msec) : 20=3.36%, 50=15.01%, 100=73.64%, 250=7.98% 00:22:33.202 cpu : usr=44.54%, sys=2.58%, ctx=1336, majf=0, minf=9 00:22:33.202 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:33.202 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.202 filename0: (groupid=0, jobs=1): err= 0: pid=83490: Tue Nov 19 10:16:45 2024 00:22:33.202 read: IOPS=209, BW=837KiB/s (857kB/s)(8376KiB/10010msec) 00:22:33.202 slat (usec): min=4, max=4028, avg=19.64, stdev=141.58 00:22:33.202 clat (msec): min=31, max=151, avg=76.35, stdev=20.45 00:22:33.202 lat (msec): min=31, max=152, avg=76.36, stdev=20.45 00:22:33.202 clat percentiles (msec): 00:22:33.202 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 58], 00:22:33.202 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 78], 00:22:33.202 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 113], 00:22:33.202 | 99.00th=[ 134], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 153], 00:22:33.202 | 99.99th=[ 153] 00:22:33.202 bw ( KiB/s): min= 638, max= 1024, per=3.83%, avg=833.85, stdev=116.38, samples=20 00:22:33.202 iops : min= 159, max= 256, avg=208.35, stdev=29.14, samples=20 00:22:33.202 lat (msec) : 50=10.32%, 100=75.88%, 250=13.80% 00:22:33.202 cpu : usr=40.97%, sys=2.41%, ctx=1352, majf=0, minf=9 00:22:33.202 IO depths : 1=0.1%, 2=3.2%, 4=13.2%, 8=69.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:22:33.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 complete : 0=0.0%, 4=90.7%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.202 issued rwts: total=2094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.202 filename1: (groupid=0, jobs=1): err= 0: pid=83491: Tue Nov 19 10:16:45 2024 00:22:33.202 read: IOPS=203, BW=814KiB/s (833kB/s)(8168KiB/10035msec) 00:22:33.202 slat (usec): min=7, max=8022, avg=23.79, stdev=234.79 00:22:33.202 clat (msec): min=13, max=156, avg=78.39, stdev=21.90 00:22:33.202 lat (msec): min=13, max=156, avg=78.41, stdev=21.90 00:22:33.202 clat percentiles (msec): 00:22:33.202 | 1.00th=[ 22], 5.00th=[ 43], 10.00th=[ 52], 20.00th=[ 68], 00:22:33.202 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 75], 60.00th=[ 81], 00:22:33.202 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:22:33.202 | 99.00th=[ 133], 99.50th=[ 133], 99.90th=[ 150], 99.95th=[ 157], 00:22:33.202 | 99.99th=[ 157] 00:22:33.203 bw ( KiB/s): min= 624, max= 1410, per=3.73%, avg=810.20, stdev=166.67, samples=20 00:22:33.203 iops : min= 156, max= 352, avg=202.50, stdev=41.60, samples=20 00:22:33.203 lat (msec) : 20=0.78%, 50=8.08%, 100=74.14%, 250=16.99% 00:22:33.203 cpu : usr=40.66%, sys=2.38%, ctx=1428, majf=0, minf=9 00:22:33.203 IO depths : 1=0.1%, 2=4.8%, 4=19.3%, 8=62.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:22:33.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 complete : 0=0.0%, 4=92.7%, 8=3.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 issued rwts: total=2042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.203 filename1: (groupid=0, jobs=1): err= 0: pid=83492: Tue Nov 19 10:16:45 2024 00:22:33.203 read: IOPS=228, BW=915KiB/s (936kB/s)(9168KiB/10025msec) 00:22:33.203 slat (usec): min=4, max=8032, avg=30.22, stdev=340.57 00:22:33.203 clat (msec): min=22, max=144, avg=69.80, stdev=20.65 00:22:33.203 lat (msec): min=22, max=144, avg=69.83, stdev=20.65 00:22:33.203 clat 
percentiles (msec): 00:22:33.203 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:22:33.203 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:22:33.203 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 108], 00:22:33.203 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 146], 00:22:33.203 | 99.99th=[ 146] 00:22:33.203 bw ( KiB/s): min= 640, max= 1142, per=4.20%, avg=912.30, stdev=119.99, samples=20 00:22:33.203 iops : min= 160, max= 285, avg=228.05, stdev=29.95, samples=20 00:22:33.203 lat (msec) : 50=23.65%, 100=67.71%, 250=8.64% 00:22:33.203 cpu : usr=31.65%, sys=1.60%, ctx=855, majf=0, minf=9 00:22:33.203 IO depths : 1=0.1%, 2=0.8%, 4=2.9%, 8=80.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:33.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.203 filename1: (groupid=0, jobs=1): err= 0: pid=83493: Tue Nov 19 10:16:45 2024 00:22:33.203 read: IOPS=217, BW=872KiB/s (893kB/s)(8724KiB/10005msec) 00:22:33.203 slat (usec): min=7, max=8042, avg=30.74, stdev=353.57 00:22:33.203 clat (msec): min=2, max=153, avg=73.24, stdev=22.63 00:22:33.203 lat (msec): min=2, max=153, avg=73.27, stdev=22.62 00:22:33.203 clat percentiles (msec): 00:22:33.203 | 1.00th=[ 5], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:22:33.203 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 77], 00:22:33.203 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 115], 00:22:33.203 | 99.00th=[ 138], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 153], 00:22:33.203 | 99.99th=[ 153] 00:22:33.203 bw ( KiB/s): min= 640, max= 1024, per=3.89%, avg=844.21, stdev=117.07, samples=19 00:22:33.203 iops : min= 160, max= 256, avg=211.05, stdev=29.27, samples=19 00:22:33.203 lat (msec) : 4=0.14%, 10=1.60%, 20=0.14%, 50=13.80%, 100=71.30% 00:22:33.203 lat (msec) : 250=13.02% 00:22:33.203 cpu : usr=38.85%, sys=2.04%, ctx=1305, majf=0, minf=9 00:22:33.203 IO depths : 1=0.1%, 2=2.5%, 4=10.2%, 8=72.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:22:33.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 complete : 0=0.0%, 4=89.9%, 8=7.9%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 issued rwts: total=2181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.203 filename1: (groupid=0, jobs=1): err= 0: pid=83494: Tue Nov 19 10:16:45 2024 00:22:33.203 read: IOPS=234, BW=940KiB/s (962kB/s)(9428KiB/10035msec) 00:22:33.203 slat (usec): min=8, max=8027, avg=24.40, stdev=250.91 00:22:33.203 clat (msec): min=12, max=131, avg=67.98, stdev=19.76 00:22:33.203 lat (msec): min=12, max=131, avg=68.01, stdev=19.77 00:22:33.203 clat percentiles (msec): 00:22:33.203 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:22:33.203 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:22:33.203 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:22:33.203 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:22:33.203 | 99.99th=[ 132] 00:22:33.203 bw ( KiB/s): min= 768, max= 1344, per=4.31%, avg=936.40, stdev=122.09, samples=20 00:22:33.203 iops : min= 192, max= 336, avg=234.10, stdev=30.52, samples=20 00:22:33.203 lat (msec) : 20=0.13%, 50=25.07%, 100=67.16%, 250=7.64% 00:22:33.203 cpu : usr=35.24%, sys=1.80%, ctx=1113, 
majf=0, minf=10 00:22:33.203 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:33.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 issued rwts: total=2357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.203 filename1: (groupid=0, jobs=1): err= 0: pid=83495: Tue Nov 19 10:16:45 2024 00:22:33.203 read: IOPS=230, BW=923KiB/s (945kB/s)(9276KiB/10051msec) 00:22:33.203 slat (usec): min=4, max=8024, avg=24.21, stdev=229.72 00:22:33.203 clat (msec): min=13, max=120, avg=69.15, stdev=20.68 00:22:33.203 lat (msec): min=13, max=120, avg=69.17, stdev=20.68 00:22:33.203 clat percentiles (msec): 00:22:33.203 | 1.00th=[ 16], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 51], 00:22:33.203 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:22:33.203 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:22:33.203 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 122], 00:22:33.203 | 99.99th=[ 122] 00:22:33.203 bw ( KiB/s): min= 768, max= 1520, per=4.23%, avg=920.85, stdev=157.49, samples=20 00:22:33.203 iops : min= 192, max= 380, avg=230.20, stdev=39.38, samples=20 00:22:33.203 lat (msec) : 20=1.47%, 50=19.19%, 100=71.02%, 250=8.32% 00:22:33.203 cpu : usr=40.22%, sys=2.33%, ctx=1431, majf=0, minf=9 00:22:33.203 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:22:33.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.203 filename1: (groupid=0, jobs=1): err= 0: pid=83496: Tue Nov 19 10:16:45 2024 00:22:33.203 read: IOPS=226, BW=906KiB/s (928kB/s)(9116KiB/10064msec) 00:22:33.203 slat (usec): min=4, max=4029, avg=16.59, stdev=118.93 00:22:33.203 clat (usec): min=1629, max=130006, avg=70474.34, stdev=24677.00 00:22:33.203 lat (usec): min=1638, max=130015, avg=70490.92, stdev=24675.11 00:22:33.203 clat percentiles (msec): 00:22:33.203 | 1.00th=[ 3], 5.00th=[ 18], 10.00th=[ 44], 20.00th=[ 52], 00:22:33.203 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:22:33.203 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 104], 95.00th=[ 109], 00:22:33.203 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 130], 99.95th=[ 130], 00:22:33.203 | 99.99th=[ 130] 00:22:33.203 bw ( KiB/s): min= 624, max= 2160, per=4.17%, avg=905.20, stdev=308.40, samples=20 00:22:33.203 iops : min= 156, max= 540, avg=226.30, stdev=77.10, samples=20 00:22:33.203 lat (msec) : 2=0.70%, 4=1.40%, 10=1.40%, 20=2.72%, 50=10.88% 00:22:33.203 lat (msec) : 100=71.13%, 250=11.76% 00:22:33.203 cpu : usr=44.34%, sys=2.83%, ctx=1246, majf=0, minf=0 00:22:33.203 IO depths : 1=0.2%, 2=2.3%, 4=8.6%, 8=73.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:22:33.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 complete : 0=0.0%, 4=89.7%, 8=8.4%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.203 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.203 filename1: (groupid=0, jobs=1): err= 0: pid=83497: Tue Nov 19 10:16:45 2024 00:22:33.203 read: IOPS=228, BW=914KiB/s (936kB/s)(9196KiB/10062msec) 00:22:33.203 slat 
(usec): min=4, max=8025, avg=32.22, stdev=382.42 00:22:33.203 clat (msec): min=13, max=132, avg=69.77, stdev=20.96 00:22:33.203 lat (msec): min=13, max=132, avg=69.81, stdev=20.97 00:22:33.203 clat percentiles (msec): 00:22:33.203 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 50], 00:22:33.203 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:22:33.203 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:22:33.204 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 132], 00:22:33.204 | 99.99th=[ 133] 00:22:33.204 bw ( KiB/s): min= 696, max= 1600, per=4.20%, avg=912.85, stdev=181.89, samples=20 00:22:33.204 iops : min= 174, max= 400, avg=228.20, stdev=45.48, samples=20 00:22:33.204 lat (msec) : 20=2.70%, 50=18.83%, 100=70.29%, 250=8.18% 00:22:33.204 cpu : usr=31.17%, sys=2.15%, ctx=877, majf=0, minf=9 00:22:33.204 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:22:33.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 issued rwts: total=2299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.204 filename1: (groupid=0, jobs=1): err= 0: pid=83498: Tue Nov 19 10:16:45 2024 00:22:33.204 read: IOPS=229, BW=919KiB/s (941kB/s)(9232KiB/10046msec) 00:22:33.204 slat (usec): min=4, max=8024, avg=25.83, stdev=279.45 00:22:33.204 clat (msec): min=12, max=132, avg=69.46, stdev=19.85 00:22:33.204 lat (msec): min=12, max=132, avg=69.49, stdev=19.86 00:22:33.204 clat percentiles (msec): 00:22:33.204 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 49], 00:22:33.204 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:22:33.204 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:22:33.204 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:22:33.204 | 99.99th=[ 132] 00:22:33.204 bw ( KiB/s): min= 761, max= 1351, per=4.23%, avg=918.10, stdev=125.40, samples=20 00:22:33.204 iops : min= 190, max= 337, avg=229.45, stdev=31.23, samples=20 00:22:33.204 lat (msec) : 20=0.09%, 50=24.57%, 100=67.29%, 250=8.06% 00:22:33.204 cpu : usr=31.57%, sys=1.74%, ctx=869, majf=0, minf=9 00:22:33.204 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:22:33.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.204 filename2: (groupid=0, jobs=1): err= 0: pid=83499: Tue Nov 19 10:16:45 2024 00:22:33.204 read: IOPS=234, BW=938KiB/s (961kB/s)(9420KiB/10039msec) 00:22:33.204 slat (usec): min=4, max=8022, avg=22.43, stdev=206.40 00:22:33.204 clat (msec): min=12, max=121, avg=68.05, stdev=19.60 00:22:33.204 lat (msec): min=12, max=121, avg=68.07, stdev=19.60 00:22:33.204 clat percentiles (msec): 00:22:33.204 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:22:33.204 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:22:33.204 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 107], 00:22:33.204 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:22:33.204 | 99.99th=[ 123] 00:22:33.204 bw ( KiB/s): min= 744, max= 1277, per=4.30%, avg=935.05, stdev=112.38, samples=20 00:22:33.204 iops : min= 186, 
max= 319, avg=233.75, stdev=28.06, samples=20 00:22:33.204 lat (msec) : 20=0.04%, 50=25.10%, 100=67.01%, 250=7.86% 00:22:33.204 cpu : usr=34.49%, sys=2.22%, ctx=1095, majf=0, minf=9 00:22:33.204 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:33.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.204 filename2: (groupid=0, jobs=1): err= 0: pid=83500: Tue Nov 19 10:16:45 2024 00:22:33.204 read: IOPS=226, BW=905KiB/s (927kB/s)(9068KiB/10019msec) 00:22:33.204 slat (usec): min=5, max=8030, avg=28.46, stdev=274.00 00:22:33.204 clat (msec): min=19, max=150, avg=70.56, stdev=19.96 00:22:33.204 lat (msec): min=19, max=150, avg=70.59, stdev=19.95 00:22:33.204 clat percentiles (msec): 00:22:33.204 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:22:33.204 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:22:33.204 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 104], 95.00th=[ 108], 00:22:33.204 | 99.00th=[ 118], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 150], 00:22:33.204 | 99.99th=[ 150] 00:22:33.204 bw ( KiB/s): min= 638, max= 1024, per=4.14%, avg=900.60, stdev=102.60, samples=20 00:22:33.204 iops : min= 159, max= 256, avg=225.05, stdev=25.75, samples=20 00:22:33.204 lat (msec) : 20=0.13%, 50=19.06%, 100=70.36%, 250=10.45% 00:22:33.204 cpu : usr=38.66%, sys=2.44%, ctx=1179, majf=0, minf=9 00:22:33.204 IO depths : 1=0.1%, 2=1.2%, 4=4.5%, 8=79.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:22:33.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 issued rwts: total=2267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.204 filename2: (groupid=0, jobs=1): err= 0: pid=83501: Tue Nov 19 10:16:45 2024 00:22:33.204 read: IOPS=228, BW=914KiB/s (935kB/s)(9160KiB/10027msec) 00:22:33.204 slat (usec): min=8, max=8038, avg=27.33, stdev=301.87 00:22:33.204 clat (msec): min=31, max=144, avg=69.84, stdev=19.40 00:22:33.204 lat (msec): min=31, max=144, avg=69.87, stdev=19.39 00:22:33.204 clat percentiles (msec): 00:22:33.204 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 50], 00:22:33.204 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:22:33.204 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 97], 95.00th=[ 108], 00:22:33.204 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 144], 00:22:33.204 | 99.99th=[ 144] 00:22:33.204 bw ( KiB/s): min= 656, max= 1104, per=4.20%, avg=912.40, stdev=102.53, samples=20 00:22:33.204 iops : min= 164, max= 276, avg=228.10, stdev=25.63, samples=20 00:22:33.204 lat (msec) : 50=21.83%, 100=70.09%, 250=8.08% 00:22:33.204 cpu : usr=37.13%, sys=2.43%, ctx=1083, majf=0, minf=9 00:22:33.204 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:33.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.204 filename2: (groupid=0, jobs=1): err= 0: pid=83502: Tue Nov 19 10:16:45 2024 
00:22:33.204 read: IOPS=199, BW=796KiB/s (816kB/s)(8008KiB/10055msec) 00:22:33.204 slat (usec): min=4, max=8026, avg=24.98, stdev=262.87 00:22:33.204 clat (msec): min=12, max=151, avg=80.11, stdev=22.87 00:22:33.204 lat (msec): min=12, max=151, avg=80.13, stdev=22.87 00:22:33.204 clat percentiles (msec): 00:22:33.204 | 1.00th=[ 15], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 69], 00:22:33.204 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 84], 00:22:33.204 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 118], 00:22:33.204 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:22:33.204 | 99.99th=[ 153] 00:22:33.204 bw ( KiB/s): min= 634, max= 1410, per=3.65%, avg=794.20, stdev=173.57, samples=20 00:22:33.204 iops : min= 158, max= 352, avg=198.50, stdev=43.32, samples=20 00:22:33.204 lat (msec) : 20=2.40%, 50=7.14%, 100=72.88%, 250=17.58% 00:22:33.204 cpu : usr=38.47%, sys=2.38%, ctx=1392, majf=0, minf=9 00:22:33.204 IO depths : 1=0.2%, 2=4.6%, 4=17.7%, 8=63.9%, 16=13.6%, 32=0.0%, >=64=0.0% 00:22:33.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 complete : 0=0.0%, 4=92.4%, 8=3.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.204 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.204 filename2: (groupid=0, jobs=1): err= 0: pid=83503: Tue Nov 19 10:16:45 2024 00:22:33.204 read: IOPS=228, BW=913KiB/s (935kB/s)(9192KiB/10063msec) 00:22:33.204 slat (usec): min=4, max=8037, avg=22.84, stdev=250.84 00:22:33.204 clat (msec): min=6, max=156, avg=69.87, stdev=23.59 00:22:33.204 lat (msec): min=6, max=156, avg=69.89, stdev=23.59 00:22:33.204 clat percentiles (msec): 00:22:33.204 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 46], 20.00th=[ 51], 00:22:33.204 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:22:33.204 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 111], 00:22:33.204 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:22:33.204 | 99.99th=[ 157] 00:22:33.204 bw ( KiB/s): min= 512, max= 1680, per=4.20%, avg=912.80, stdev=209.06, samples=20 00:22:33.205 iops : min= 128, max= 420, avg=228.20, stdev=52.27, samples=20 00:22:33.205 lat (msec) : 10=1.39%, 20=2.18%, 50=16.14%, 100=70.58%, 250=9.70% 00:22:33.205 cpu : usr=44.14%, sys=2.47%, ctx=1378, majf=0, minf=9 00:22:33.205 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:22:33.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.205 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.205 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.205 filename2: (groupid=0, jobs=1): err= 0: pid=83504: Tue Nov 19 10:16:45 2024 00:22:33.205 read: IOPS=225, BW=904KiB/s (926kB/s)(9068KiB/10033msec) 00:22:33.205 slat (usec): min=4, max=10021, avg=32.65, stdev=378.25 00:22:33.205 clat (msec): min=17, max=131, avg=70.61, stdev=18.68 00:22:33.205 lat (msec): min=17, max=131, avg=70.64, stdev=18.69 00:22:33.205 clat percentiles (msec): 00:22:33.205 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:22:33.205 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:22:33.205 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:22:33.205 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 132], 00:22:33.205 | 99.99th=[ 132] 00:22:33.205 bw ( KiB/s): 
min= 688, max= 1168, per=4.14%, avg=900.40, stdev=105.54, samples=20 00:22:33.205 iops : min= 172, max= 292, avg=225.10, stdev=26.39, samples=20 00:22:33.205 lat (msec) : 20=0.09%, 50=19.36%, 100=72.39%, 250=8.16% 00:22:33.205 cpu : usr=33.41%, sys=1.93%, ctx=1025, majf=0, minf=9 00:22:33.205 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:22:33.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.205 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.205 issued rwts: total=2267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.205 filename2: (groupid=0, jobs=1): err= 0: pid=83505: Tue Nov 19 10:16:45 2024 00:22:33.205 read: IOPS=227, BW=912KiB/s (934kB/s)(9160KiB/10045msec) 00:22:33.205 slat (usec): min=5, max=8022, avg=17.96, stdev=167.41 00:22:33.205 clat (msec): min=12, max=124, avg=70.02, stdev=20.04 00:22:33.205 lat (msec): min=12, max=124, avg=70.04, stdev=20.04 00:22:33.205 clat percentiles (msec): 00:22:33.205 | 1.00th=[ 20], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 51], 00:22:33.205 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:22:33.205 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:22:33.205 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 126], 00:22:33.205 | 99.99th=[ 126] 00:22:33.205 bw ( KiB/s): min= 768, max= 1383, per=4.19%, avg=910.90, stdev=128.52, samples=20 00:22:33.205 iops : min= 192, max= 345, avg=227.65, stdev=31.99, samples=20 00:22:33.205 lat (msec) : 20=1.31%, 50=18.25%, 100=72.01%, 250=8.43% 00:22:33.205 cpu : usr=34.16%, sys=2.11%, ctx=1100, majf=0, minf=9 00:22:33.205 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:22:33.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.205 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.205 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:33.205 filename2: (groupid=0, jobs=1): err= 0: pid=83506: Tue Nov 19 10:16:45 2024 00:22:33.205 read: IOPS=222, BW=889KiB/s (910kB/s)(8920KiB/10036msec) 00:22:33.205 slat (usec): min=7, max=4027, avg=21.99, stdev=163.97 00:22:33.205 clat (msec): min=28, max=156, avg=71.80, stdev=20.05 00:22:33.205 lat (msec): min=28, max=156, avg=71.82, stdev=20.04 00:22:33.205 clat percentiles (msec): 00:22:33.205 | 1.00th=[ 31], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:22:33.205 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:22:33.205 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 109], 00:22:33.205 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 157], 00:22:33.205 | 99.99th=[ 157] 00:22:33.205 bw ( KiB/s): min= 640, max= 1152, per=4.09%, avg=888.20, stdev=130.23, samples=20 00:22:33.205 iops : min= 160, max= 288, avg=222.05, stdev=32.56, samples=20 00:22:33.205 lat (msec) : 50=17.80%, 100=72.11%, 250=10.09% 00:22:33.205 cpu : usr=41.92%, sys=2.58%, ctx=1322, majf=0, minf=9 00:22:33.205 IO depths : 1=0.1%, 2=1.7%, 4=6.5%, 8=76.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:22:33.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.205 complete : 0=0.0%, 4=88.9%, 8=9.7%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.205 issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.205 latency : target=0, window=0, percentile=100.00%, depth=16 
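For reference, the 24 reader threads reported above are the product of the parameters chosen for this pass (bs=4k, numjobs=8, iodepth=16, files=2, i.e. three file sections x 8 jobs each). The job file itself is generated by gen_fio_conf and handed to fio on /dev/fd/61, so it is not visible in the log; a minimal hand-written equivalent, assuming the Nvme0n1/Nvme1n1/Nvme2n1 bdev names that the bdev_nvme_attach_controller entries would produce and a nominal 10s time-based run, would look roughly like:

  cat > dif_rand.fio <<'EOF'
  [global]
  thread=1        # the SPDK fio plugin is typically run with thread=1
  rw=randread
  bs=4k
  numjobs=8
  iodepth=16
  time_based=1
  runtime=10

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

  [filename2]
  filename=Nvme2n1
  EOF

Each per-job "per=" figure above is simply that thread's share of the aggregate READ bandwidth reported in the run summary that follows.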
00:22:33.205 00:22:33.205 Run status group 0 (all jobs): 00:22:33.205 READ: bw=21.2MiB/s (22.2MB/s), 796KiB/s-965KiB/s (816kB/s-989kB/s), io=214MiB (224MB), run=10001-10064msec 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:33.205 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.206 bdev_null0 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.206 [2024-11-19 10:16:45.463368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.206 bdev_null1 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.206 { 00:22:33.206 "params": { 00:22:33.206 "name": "Nvme$subsystem", 00:22:33.206 "trtype": "$TEST_TRANSPORT", 00:22:33.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.206 "adrfam": "ipv4", 00:22:33.206 "trsvcid": "$NVMF_PORT", 00:22:33.206 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.206 "hdgst": ${hdgst:-false}, 00:22:33.206 "ddgst": ${ddgst:-false} 00:22:33.206 }, 00:22:33.206 "method": "bdev_nvme_attach_controller" 00:22:33.206 } 00:22:33.206 EOF 00:22:33.206 )") 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.206 { 00:22:33.206 "params": { 00:22:33.206 "name": "Nvme$subsystem", 00:22:33.206 "trtype": "$TEST_TRANSPORT", 00:22:33.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.206 "adrfam": "ipv4", 00:22:33.206 "trsvcid": "$NVMF_PORT", 00:22:33.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.206 "hdgst": ${hdgst:-false}, 00:22:33.206 "ddgst": ${ddgst:-false} 00:22:33.206 }, 00:22:33.206 "method": "bdev_nvme_attach_controller" 00:22:33.206 } 00:22:33.206 EOF 00:22:33.206 )") 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:33.206 "params": { 00:22:33.206 "name": "Nvme0", 00:22:33.206 "trtype": "tcp", 00:22:33.206 "traddr": "10.0.0.3", 00:22:33.206 "adrfam": "ipv4", 00:22:33.206 "trsvcid": "4420", 00:22:33.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:33.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:33.206 "hdgst": false, 00:22:33.206 "ddgst": false 00:22:33.206 }, 00:22:33.206 "method": "bdev_nvme_attach_controller" 00:22:33.206 },{ 00:22:33.206 "params": { 00:22:33.206 "name": "Nvme1", 00:22:33.206 "trtype": "tcp", 00:22:33.206 "traddr": "10.0.0.3", 00:22:33.206 "adrfam": "ipv4", 00:22:33.206 "trsvcid": "4420", 00:22:33.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.206 "hdgst": false, 00:22:33.206 "ddgst": false 00:22:33.206 }, 00:22:33.206 "method": "bdev_nvme_attach_controller" 00:22:33.206 }' 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:33.206 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:33.207 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:33.207 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:33.207 10:16:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:33.207 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:33.207 ... 00:22:33.207 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:33.207 ... 
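The fio_plugin/fio_bdev wrappers traced above handle one detail worth calling out: if the spdk_bdev fio plugin was built against a sanitizer, that runtime has to appear in LD_PRELOAD ahead of the plugin. A condensed sketch of that logic, paraphrasing what autotest_common.sh does (the function name here is illustrative only):

fio_plugin_sketch() {
    local plugin=$1; shift
    local sanitizer asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # Pick the sanitizer runtime out of the plugin's dynamic dependencies, if any.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # Preload the (possibly empty) sanitizer runtime, then the plugin itself.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}

In this run neither grep matches, so asan_lib stays empty and the call reduces to the LD_PRELOAD=' .../build/fio/spdk_bdev' invocation seen above, with the JSON attach config arriving on /dev/fd/62 and the generated fio job file on /dev/fd/61.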
00:22:33.207 fio-3.35 00:22:33.207 Starting 4 threads 00:22:38.487 00:22:38.487 filename0: (groupid=0, jobs=1): err= 0: pid=83645: Tue Nov 19 10:16:51 2024 00:22:38.487 read: IOPS=2279, BW=17.8MiB/s (18.7MB/s)(89.1MiB/5003msec) 00:22:38.487 slat (usec): min=4, max=127, avg=11.53, stdev= 4.43 00:22:38.487 clat (usec): min=641, max=7105, avg=3474.72, stdev=1072.48 00:22:38.487 lat (usec): min=649, max=7126, avg=3486.25, stdev=1072.90 00:22:38.487 clat percentiles (usec): 00:22:38.487 | 1.00th=[ 1205], 5.00th=[ 1401], 10.00th=[ 1418], 20.00th=[ 2933], 00:22:38.487 | 30.00th=[ 3294], 40.00th=[ 3720], 50.00th=[ 3851], 60.00th=[ 3916], 00:22:38.487 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4293], 95.00th=[ 4948], 00:22:38.487 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6128], 99.95th=[ 6849], 00:22:38.487 | 99.99th=[ 7111] 00:22:38.487 bw ( KiB/s): min=14416, max=20768, per=28.11%, avg=18457.56, stdev=2238.47, samples=9 00:22:38.487 iops : min= 1802, max= 2596, avg=2307.11, stdev=279.78, samples=9 00:22:38.487 lat (usec) : 750=0.51%, 1000=0.07% 00:22:38.487 lat (msec) : 2=15.70%, 4=53.08%, 10=30.65% 00:22:38.487 cpu : usr=91.38%, sys=7.48%, ctx=59, majf=0, minf=0 00:22:38.487 IO depths : 1=0.1%, 2=7.4%, 4=60.6%, 8=31.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.487 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.487 issued rwts: total=11404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.487 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:38.487 filename0: (groupid=0, jobs=1): err= 0: pid=83646: Tue Nov 19 10:16:51 2024 00:22:38.487 read: IOPS=1974, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5001msec) 00:22:38.487 slat (nsec): min=3936, max=51496, avg=15672.42, stdev=4160.05 00:22:38.487 clat (usec): min=1278, max=6266, avg=3995.65, stdev=529.00 00:22:38.487 lat (usec): min=1294, max=6281, avg=4011.32, stdev=528.85 00:22:38.487 clat percentiles (usec): 00:22:38.487 | 1.00th=[ 2024], 5.00th=[ 3064], 10.00th=[ 3326], 20.00th=[ 3818], 00:22:38.487 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4146], 60.00th=[ 4178], 00:22:38.487 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4752], 00:22:38.487 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5604], 99.95th=[ 5604], 00:22:38.487 | 99.99th=[ 6259] 00:22:38.487 bw ( KiB/s): min=14976, max=17328, per=23.98%, avg=15745.78, stdev=807.70, samples=9 00:22:38.487 iops : min= 1872, max= 2166, avg=1968.22, stdev=100.96, samples=9 00:22:38.487 lat (msec) : 2=0.97%, 4=42.39%, 10=56.64% 00:22:38.487 cpu : usr=92.14%, sys=7.08%, ctx=8, majf=0, minf=0 00:22:38.487 IO depths : 1=0.1%, 2=18.8%, 4=54.6%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.487 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.487 issued rwts: total=9873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.487 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:38.487 filename1: (groupid=0, jobs=1): err= 0: pid=83647: Tue Nov 19 10:16:51 2024 00:22:38.487 read: IOPS=1975, BW=15.4MiB/s (16.2MB/s)(77.2MiB/5002msec) 00:22:38.487 slat (nsec): min=3796, max=50778, avg=15367.18, stdev=3753.98 00:22:38.487 clat (usec): min=1234, max=6267, avg=3995.27, stdev=532.95 00:22:38.487 lat (usec): min=1244, max=6282, avg=4010.64, stdev=532.98 00:22:38.487 clat percentiles (usec): 00:22:38.487 | 1.00th=[ 1975], 5.00th=[ 3032], 10.00th=[ 3326], 20.00th=[ 
3818], 00:22:38.487 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4146], 60.00th=[ 4178], 00:22:38.487 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4752], 00:22:38.487 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5407], 99.95th=[ 5407], 00:22:38.487 | 99.99th=[ 6259] 00:22:38.487 bw ( KiB/s): min=14976, max=17328, per=23.98%, avg=15745.78, stdev=807.70, samples=9 00:22:38.487 iops : min= 1872, max= 2166, avg=1968.22, stdev=100.96, samples=9 00:22:38.487 lat (msec) : 2=1.03%, 4=42.38%, 10=56.58% 00:22:38.487 cpu : usr=92.12%, sys=7.10%, ctx=5, majf=0, minf=0 00:22:38.487 IO depths : 1=0.1%, 2=18.8%, 4=54.6%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.487 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.487 issued rwts: total=9881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.487 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:38.487 filename1: (groupid=0, jobs=1): err= 0: pid=83648: Tue Nov 19 10:16:51 2024 00:22:38.487 read: IOPS=1979, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5003msec) 00:22:38.487 slat (usec): min=3, max=110, avg=14.68, stdev= 4.19 00:22:38.487 clat (usec): min=981, max=7210, avg=3990.64, stdev=540.19 00:22:38.487 lat (usec): min=990, max=7225, avg=4005.32, stdev=540.65 00:22:38.487 clat percentiles (usec): 00:22:38.487 | 1.00th=[ 1942], 5.00th=[ 3032], 10.00th=[ 3326], 20.00th=[ 3818], 00:22:38.487 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4146], 60.00th=[ 4178], 00:22:38.487 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4752], 00:22:38.487 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5932], 99.95th=[ 5997], 00:22:38.487 | 99.99th=[ 7242] 00:22:38.487 bw ( KiB/s): min=14976, max=17264, per=24.04%, avg=15783.11, stdev=852.03, samples=9 00:22:38.487 iops : min= 1872, max= 2158, avg=1972.89, stdev=106.50, samples=9 00:22:38.487 lat (usec) : 1000=0.03% 00:22:38.487 lat (msec) : 2=1.04%, 4=42.44%, 10=56.49% 00:22:38.487 cpu : usr=91.92%, sys=7.28%, ctx=23, majf=0, minf=0 00:22:38.487 IO depths : 1=0.1%, 2=18.6%, 4=54.7%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.487 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.487 issued rwts: total=9901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.487 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:38.487 00:22:38.487 Run status group 0 (all jobs): 00:22:38.487 READ: bw=64.1MiB/s (67.2MB/s), 15.4MiB/s-17.8MiB/s (16.2MB/s-18.7MB/s), io=321MiB (336MB), run=5001-5003msec 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.487 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.488 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.488 10:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:38.488 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.488 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.488 ************************************ 00:22:38.488 END TEST fio_dif_rand_params 00:22:38.488 ************************************ 00:22:38.488 00:22:38.488 real 0m23.666s 00:22:38.488 user 2m3.459s 00:22:38.488 sys 0m8.751s 00:22:38.488 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.488 10:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 10:16:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:38.488 10:16:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:38.488 10:16:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.488 10:16:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 ************************************ 00:22:38.488 START TEST fio_dif_digest 00:22:38.488 ************************************ 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:22:38.488 
10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 bdev_null0 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 [2024-11-19 10:16:51.686768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.488 { 00:22:38.488 "params": { 00:22:38.488 "name": "Nvme$subsystem", 00:22:38.488 "trtype": "$TEST_TRANSPORT", 00:22:38.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.488 "adrfam": "ipv4", 00:22:38.488 "trsvcid": "$NVMF_PORT", 00:22:38.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.488 "hdgst": ${hdgst:-false}, 00:22:38.488 "ddgst": ${ddgst:-false} 00:22:38.488 }, 00:22:38.488 "method": "bdev_nvme_attach_controller" 
00:22:38.488 } 00:22:38.488 EOF 00:22:38.488 )") 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
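Compared with the random-params run, the host-side difference for the digest test is that dif.sh@128 (traced earlier) sets hdgst=true and ddgst=true before the attach config is generated, so the ${hdgst:-false}/${ddgst:-false} defaults in the heredoc above resolve to true and the initiator negotiates NVMe/TCP header and data digests (CRC32C). As a sketch:

hdgst=true
ddgst=true
# With these set, gen_nvmf_target_json emits "hdgst": true / "ddgst": true in the
# bdev_nvme_attach_controller params; the pretty-printed result follows below.
gen_nvmf_target_json 0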
00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:38.488 "params": { 00:22:38.488 "name": "Nvme0", 00:22:38.488 "trtype": "tcp", 00:22:38.488 "traddr": "10.0.0.3", 00:22:38.488 "adrfam": "ipv4", 00:22:38.488 "trsvcid": "4420", 00:22:38.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:38.488 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:38.488 "hdgst": true, 00:22:38.488 "ddgst": true 00:22:38.488 }, 00:22:38.488 "method": "bdev_nvme_attach_controller" 00:22:38.488 }' 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:38.488 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:38.489 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.489 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.489 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:38.489 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:38.489 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:38.489 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:38.489 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:38.489 10:16:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.489 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:38.489 ... 
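To reproduce that digest run outside the harness, the printed attach config only needs the standard SPDK JSON-config wrapper around it. Everything inside "params" is verbatim from the log; the wrapper shape, the temporary file name, the Nvme0n1 bdev name and the fio job options (mirroring bs=128k, numjobs=3, iodepth=3, runtime=10 from dif.sh@127) are a sketch, not the harness's literal command line.

cat > /tmp/digest.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --name=digest --ioengine=spdk_bdev --spdk_json_conf=/tmp/digest.json \
    --thread=1 --filename=Nvme0n1 --rw=randread --bs=128k \
    --numjobs=3 --iodepth=3 --runtime=10 --time_based=1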
00:22:38.489 fio-3.35 00:22:38.489 Starting 3 threads 00:22:48.631 00:22:48.631 filename0: (groupid=0, jobs=1): err= 0: pid=83754: Tue Nov 19 10:17:02 2024 00:22:48.631 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(287MiB/10011msec) 00:22:48.631 slat (nsec): min=7118, max=45684, avg=11270.56, stdev=4593.52 00:22:48.631 clat (usec): min=5680, max=14810, avg=13074.25, stdev=389.89 00:22:48.631 lat (usec): min=5688, max=14826, avg=13085.52, stdev=390.01 00:22:48.631 clat percentiles (usec): 00:22:48.631 | 1.00th=[12256], 5.00th=[12518], 10.00th=[12780], 20.00th=[12911], 00:22:48.631 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13173], 00:22:48.631 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:22:48.631 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14746], 99.95th=[14746], 00:22:48.631 | 99.99th=[14746] 00:22:48.631 bw ( KiB/s): min=28416, max=29952, per=33.36%, avg=29299.20, stdev=375.83, samples=20 00:22:48.631 iops : min= 222, max= 234, avg=228.90, stdev= 2.94, samples=20 00:22:48.631 lat (msec) : 10=0.13%, 20=99.87% 00:22:48.631 cpu : usr=90.64%, sys=8.64%, ctx=23, majf=0, minf=0 00:22:48.631 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:48.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.631 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.631 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:48.631 filename0: (groupid=0, jobs=1): err= 0: pid=83755: Tue Nov 19 10:17:02 2024 00:22:48.631 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10005msec) 00:22:48.631 slat (nsec): min=7245, max=50838, avg=10067.55, stdev=3509.32 00:22:48.631 clat (usec): min=10117, max=16121, avg=13087.92, stdev=320.33 00:22:48.631 lat (usec): min=10125, max=16145, avg=13097.99, stdev=320.59 00:22:48.631 clat percentiles (usec): 00:22:48.631 | 1.00th=[12256], 5.00th=[12518], 10.00th=[12780], 20.00th=[12911], 00:22:48.631 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:22:48.631 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:22:48.631 | 99.00th=[13698], 99.50th=[13829], 99.90th=[16057], 99.95th=[16057], 00:22:48.631 | 99.99th=[16057] 00:22:48.631 bw ( KiB/s): min=28416, max=29952, per=33.31%, avg=29260.80, stdev=492.08, samples=20 00:22:48.631 iops : min= 222, max= 234, avg=228.60, stdev= 3.84, samples=20 00:22:48.631 lat (msec) : 20=100.00% 00:22:48.631 cpu : usr=89.52%, sys=9.93%, ctx=15, majf=0, minf=0 00:22:48.631 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:48.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.631 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.631 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:48.631 filename0: (groupid=0, jobs=1): err= 0: pid=83756: Tue Nov 19 10:17:02 2024 00:22:48.631 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10006msec) 00:22:48.631 slat (nsec): min=7207, max=43984, avg=10609.60, stdev=4050.89 00:22:48.631 clat (usec): min=11699, max=14900, avg=13086.75, stdev=294.17 00:22:48.631 lat (usec): min=11707, max=14922, avg=13097.36, stdev=294.41 00:22:48.631 clat percentiles (usec): 00:22:48.631 | 1.00th=[12256], 5.00th=[12518], 10.00th=[12780], 20.00th=[12911], 00:22:48.631 | 30.00th=[13042], 40.00th=[13042], 
50.00th=[13042], 60.00th=[13173], 00:22:48.631 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:22:48.631 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14877], 99.95th=[14877], 00:22:48.631 | 99.99th=[14877] 00:22:48.631 bw ( KiB/s): min=28416, max=29952, per=33.31%, avg=29260.80, stdev=492.08, samples=20 00:22:48.631 iops : min= 222, max= 234, avg=228.60, stdev= 3.84, samples=20 00:22:48.631 lat (msec) : 20=100.00% 00:22:48.631 cpu : usr=90.00%, sys=9.42%, ctx=10, majf=0, minf=0 00:22:48.631 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:48.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.631 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.631 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:48.631 00:22:48.631 Run status group 0 (all jobs): 00:22:48.631 READ: bw=85.8MiB/s (89.9MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=859MiB (900MB), run=10005-10011msec 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:48.932 ************************************ 00:22:48.932 END TEST fio_dif_digest 00:22:48.932 ************************************ 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.932 00:22:48.932 real 0m11.075s 00:22:48.932 user 0m27.711s 00:22:48.932 sys 0m3.077s 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.932 10:17:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:48.932 10:17:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:48.932 10:17:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:48.932 10:17:02 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.932 10:17:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:22:48.932 10:17:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.932 10:17:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:22:48.932 10:17:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.932 10:17:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.932 rmmod nvme_tcp 00:22:49.191 rmmod nvme_fabrics 00:22:49.191 rmmod nvme_keyring 00:22:49.191 10:17:02 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.191 10:17:02 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:22:49.191 10:17:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:22:49.191 10:17:02 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83009 ']' 00:22:49.191 10:17:02 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83009 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83009 ']' 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83009 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83009 00:22:49.191 killing process with pid 83009 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83009' 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83009 00:22:49.191 10:17:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83009 00:22:49.450 10:17:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:49.450 10:17:03 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:49.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:49.708 Waiting for block devices as requested 00:22:49.708 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.967 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.967 10:17:03 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.967 10:17:03 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.967 10:17:03 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:22:49.967 10:17:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:22:49.967 10:17:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.967 10:17:03 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:49.968 10:17:03 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:50.227 10:17:03 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:50.227 10:17:03 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:50.227 10:17:03 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:22:50.227 10:17:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.227 10:17:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:50.227 10:17:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.227 10:17:03 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:22:50.227 ************************************ 00:22:50.227 END TEST nvmf_dif 00:22:50.227 ************************************ 00:22:50.227 00:22:50.227 real 0m59.871s 00:22:50.227 user 3m47.430s 00:22:50.227 sys 0m20.512s 00:22:50.227 10:17:03 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.227 10:17:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:50.227 10:17:03 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:50.227 10:17:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:50.227 10:17:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.227 10:17:03 -- common/autotest_common.sh@10 -- # set +x 00:22:50.227 ************************************ 00:22:50.227 START TEST nvmf_abort_qd_sizes 00:22:50.227 ************************************ 00:22:50.227 10:17:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:50.227 * Looking for test storage... 00:22:50.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:50.227 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:50.227 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:22:50.227 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:50.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.486 --rc genhtml_branch_coverage=1 00:22:50.486 --rc genhtml_function_coverage=1 00:22:50.486 --rc genhtml_legend=1 00:22:50.486 --rc geninfo_all_blocks=1 00:22:50.486 --rc geninfo_unexecuted_blocks=1 00:22:50.486 00:22:50.486 ' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:50.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.486 --rc genhtml_branch_coverage=1 00:22:50.486 --rc genhtml_function_coverage=1 00:22:50.486 --rc genhtml_legend=1 00:22:50.486 --rc geninfo_all_blocks=1 00:22:50.486 --rc geninfo_unexecuted_blocks=1 00:22:50.486 00:22:50.486 ' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:50.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.486 --rc genhtml_branch_coverage=1 00:22:50.486 --rc genhtml_function_coverage=1 00:22:50.486 --rc genhtml_legend=1 00:22:50.486 --rc geninfo_all_blocks=1 00:22:50.486 --rc geninfo_unexecuted_blocks=1 00:22:50.486 00:22:50.486 ' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:50.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.486 --rc genhtml_branch_coverage=1 00:22:50.486 --rc genhtml_function_coverage=1 00:22:50.486 --rc genhtml_legend=1 00:22:50.486 --rc geninfo_all_blocks=1 00:22:50.486 --rc geninfo_unexecuted_blocks=1 00:22:50.486 00:22:50.486 ' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.486 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.486 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:50.487 Cannot find device "nvmf_init_br" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:50.487 Cannot find device "nvmf_init_br2" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:50.487 Cannot find device "nvmf_tgt_br" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:50.487 Cannot find device "nvmf_tgt_br2" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:50.487 Cannot find device "nvmf_init_br" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:50.487 Cannot find device "nvmf_init_br2" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:50.487 Cannot find device "nvmf_tgt_br" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:50.487 Cannot find device "nvmf_tgt_br2" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:50.487 Cannot find device "nvmf_br" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:50.487 Cannot find device "nvmf_init_if" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:50.487 Cannot find device "nvmf_init_if2" 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:50.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
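The commands that follow rebuild the virtual test network from scratch (the "Cannot find device" and "Cannot open network namespace" lines above are just the cleanup pass finding nothing left to delete). Condensed to one veth pair per side, the topology nvmf_veth_init creates looks roughly like this; the full script also adds the nvmf_init_if2/nvmf_tgt_if2 pair carrying 10.0.0.2 and 10.0.0.4:

ip netns add nvmf_tgt_ns_spdk
# Initiator-side pair stays in the root namespace, target-side pair moves into the netns.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the *_br peer ends together so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open TCP/4420 on the initiator interface, allow forwarding across the bridge, verify.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3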
00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:50.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:50.487 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:50.745 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:50.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:50.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:22:50.746 00:22:50.746 --- 10.0.0.3 ping statistics --- 00:22:50.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.746 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:50.746 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:50.746 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:22:50.746 00:22:50.746 --- 10.0.0.4 ping statistics --- 00:22:50.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.746 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:50.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:50.746 00:22:50.746 --- 10.0.0.1 ping statistics --- 00:22:50.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.746 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:50.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:50.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:22:50.746 00:22:50.746 --- 10.0.0.2 ping statistics --- 00:22:50.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.746 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:50.746 10:17:04 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:51.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:51.681 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:51.681 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84404 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84404 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84404 ']' 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.681 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:51.681 [2024-11-19 10:17:05.557788] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:22:51.681 [2024-11-19 10:17:05.557938] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.940 [2024-11-19 10:17:05.711289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.940 [2024-11-19 10:17:05.768277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.940 [2024-11-19 10:17:05.768347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.940 [2024-11-19 10:17:05.768359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.940 [2024-11-19 10:17:05.768367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.940 [2024-11-19 10:17:05.768375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.940 [2024-11-19 10:17:05.769644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.940 [2024-11-19 10:17:05.769703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.940 [2024-11-19 10:17:05.769842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.940 [2024-11-19 10:17:05.769839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.940 [2024-11-19 10:17:05.824383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:22:52.199 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:22:52.200 10:17:05 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
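Condensed, the userspace-NVMe enumeration traced above (scripts/common.sh, nvme_in_userspace) is roughly the pipeline below; a sketch that assumes no PCI allow/block lists are configured, which is the case in this run:

# NVMe controllers are PCI class 01 (mass storage), subclass 08 (NVM), prog-if 02
lspci -mm -n -D | grep -i -- -p02 \
  | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
  | tr -d '"'
# here the filter yields 0000:00:10.0 and 0000:00:11.0; the first BDF is the
# one handed to the spdk_target_abort test as its attach target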
00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.200 10:17:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:52.200 ************************************ 00:22:52.200 START TEST spdk_target_abort 00:22:52.200 ************************************ 00:22:52.200 10:17:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:22:52.200 10:17:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:52.200 10:17:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:52.200 10:17:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.200 10:17:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:52.200 spdk_targetn1 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:52.200 [2024-11-19 10:17:06.051013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.200 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:52.459 [2024-11-19 10:17:06.090112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:52.459 10:17:06 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:52.459 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:52.460 10:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:55.759 Initializing NVMe Controllers 00:22:55.759 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:55.759 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:55.759 Initialization complete. Launching workers. 
00:22:55.759 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10443, failed: 0 00:22:55.759 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1047, failed to submit 9396 00:22:55.759 success 783, unsuccessful 264, failed 0 00:22:55.759 10:17:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:55.759 10:17:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:59.046 Initializing NVMe Controllers 00:22:59.046 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:59.046 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:59.046 Initialization complete. Launching workers. 00:22:59.046 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8984, failed: 0 00:22:59.046 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1164, failed to submit 7820 00:22:59.046 success 394, unsuccessful 770, failed 0 00:22:59.046 10:17:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:59.046 10:17:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:02.333 Initializing NVMe Controllers 00:23:02.333 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:02.333 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:02.333 Initialization complete. Launching workers. 
00:23:02.333 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30887, failed: 0 00:23:02.333 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2392, failed to submit 28495 00:23:02.333 success 448, unsuccessful 1944, failed 0 00:23:02.333 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:02.333 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.333 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.333 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.333 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:02.333 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.333 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84404 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84404 ']' 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84404 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84404 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.902 killing process with pid 84404 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84404' 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84404 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84404 00:23:02.902 00:23:02.902 real 0m10.777s 00:23:02.902 user 0m41.070s 00:23:02.902 sys 0m2.438s 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.902 10:17:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.902 ************************************ 00:23:02.902 END TEST spdk_target_abort 00:23:02.902 ************************************ 00:23:03.162 10:17:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:03.162 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:03.162 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.162 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:03.162 ************************************ 00:23:03.162 START TEST kernel_target_abort 00:23:03.162 
************************************ 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:03.162 10:17:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:03.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:03.421 Waiting for block devices as requested 00:23:03.421 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:03.679 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:03.679 No valid GPT data, bailing 00:23:03.679 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:03.680 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:03.938 No valid GPT data, bailing 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:03.938 No valid GPT data, bailing 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:03.938 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:03.939 No valid GPT data, bailing 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a --hostid=6147973c-080a-4377-b1e7-85172bdc559a -a 10.0.0.1 -t tcp -s 4420 00:23:03.939 00:23:03.939 Discovery Log Number of Records 2, Generation counter 2 00:23:03.939 =====Discovery Log Entry 0====== 00:23:03.939 trtype: tcp 00:23:03.939 adrfam: ipv4 00:23:03.939 subtype: current discovery subsystem 00:23:03.939 treq: not specified, sq flow control disable supported 00:23:03.939 portid: 1 00:23:03.939 trsvcid: 4420 00:23:03.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:03.939 traddr: 10.0.0.1 00:23:03.939 eflags: none 00:23:03.939 sectype: none 00:23:03.939 =====Discovery Log Entry 1====== 00:23:03.939 trtype: tcp 00:23:03.939 adrfam: ipv4 00:23:03.939 subtype: nvme subsystem 00:23:03.939 treq: not specified, sq flow control disable supported 00:23:03.939 portid: 1 00:23:03.939 trsvcid: 4420 00:23:03.939 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:03.939 traddr: 10.0.0.1 00:23:03.939 eflags: none 00:23:03.939 sectype: none 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:03.939 10:17:17 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:03.939 10:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:07.243 Initializing NVMe Controllers 00:23:07.243 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:07.243 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:07.243 Initialization complete. Launching workers. 00:23:07.243 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33141, failed: 0 00:23:07.243 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33141, failed to submit 0 00:23:07.243 success 0, unsuccessful 33141, failed 0 00:23:07.243 10:17:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:07.243 10:17:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:10.532 Initializing NVMe Controllers 00:23:10.532 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:10.532 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:10.532 Initialization complete. Launching workers. 
00:23:10.532 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68815, failed: 0 00:23:10.532 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29608, failed to submit 39207 00:23:10.532 success 0, unsuccessful 29608, failed 0 00:23:10.532 10:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:10.533 10:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:13.837 Initializing NVMe Controllers 00:23:13.837 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:13.837 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:13.837 Initialization complete. Launching workers. 00:23:13.837 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79136, failed: 0 00:23:13.837 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19788, failed to submit 59348 00:23:13.837 success 0, unsuccessful 19788, failed 0 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:13.837 10:17:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:14.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:16.310 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:16.310 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:16.310 ************************************ 00:23:16.310 END TEST kernel_target_abort 00:23:16.310 ************************************ 00:23:16.310 00:23:16.310 real 0m13.026s 00:23:16.310 user 0m6.286s 00:23:16.310 sys 0m4.144s 00:23:16.310 10:17:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.310 10:17:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:16.310 
10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.310 rmmod nvme_tcp 00:23:16.310 rmmod nvme_fabrics 00:23:16.310 rmmod nvme_keyring 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84404 ']' 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84404 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84404 ']' 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84404 00:23:16.310 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84404) - No such process 00:23:16.310 Process with pid 84404 is not found 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84404 is not found' 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:16.310 10:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:16.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:16.569 Waiting for block devices as requested 00:23:16.569 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:16.828 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:16.828 10:17:30 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:16.828 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:23:17.087 ************************************ 00:23:17.087 END TEST nvmf_abort_qd_sizes 00:23:17.087 ************************************ 00:23:17.087 00:23:17.087 real 0m26.811s 00:23:17.087 user 0m48.546s 00:23:17.087 sys 0m8.008s 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.087 10:17:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:17.087 10:17:30 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:17.087 10:17:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:17.087 10:17:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.087 10:17:30 -- common/autotest_common.sh@10 -- # set +x 00:23:17.087 ************************************ 00:23:17.087 START TEST keyring_file 00:23:17.087 ************************************ 00:23:17.087 10:17:30 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:17.087 * Looking for test storage... 
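For reference, the kernel_target_abort run earlier in this log drives the in-kernel nvmet target purely through configfs. The xtrace shows the echoed values but not their redirect targets, so the attribute names below are filled in from the standard nvmet configfs layout and should be read as an illustrative sketch (device /dev/nvme1n1, address 10.0.0.1, port 4420 as in the trace):

modprobe nvmet
cfg=/sys/kernel/config/nvmet
sub=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$cfg/ports/1"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"            # attribute name assumed
echo 1                                > "$sub/attr_allow_any_host"   # attribute name assumed
echo /dev/nvme1n1                     > "$sub/namespaces/1/device_path"
echo 1                                > "$sub/namespaces/1/enable"

echo 10.0.0.1 > "$cfg/ports/1/addr_traddr"
echo tcp      > "$cfg/ports/1/addr_trtype"
echo 4420     > "$cfg/ports/1/addr_trsvcid"
echo ipv4     > "$cfg/ports/1/addr_adrfam"

# expose the subsystem on the port; 'nvme discover' then reports it at 10.0.0.1:4420
ln -s "$sub" "$cfg/ports/1/subsystems/"

Teardown (clean_kernel_target) is the mirror image, as the trace shows: remove the port/subsystems symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.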
00:23:17.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:17.087 10:17:30 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:17.087 10:17:30 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:17.087 10:17:30 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:23:17.346 10:17:31 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.346 10:17:31 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@345 -- # : 1 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@353 -- # local d=1 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@355 -- # echo 1 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@353 -- # local d=2 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@355 -- # echo 2 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@368 -- # return 0 00:23:17.347 10:17:31 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.347 10:17:31 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:17.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.347 --rc genhtml_branch_coverage=1 00:23:17.347 --rc genhtml_function_coverage=1 00:23:17.347 --rc genhtml_legend=1 00:23:17.347 --rc geninfo_all_blocks=1 00:23:17.347 --rc geninfo_unexecuted_blocks=1 00:23:17.347 00:23:17.347 ' 00:23:17.347 10:17:31 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:17.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.347 --rc genhtml_branch_coverage=1 00:23:17.347 --rc genhtml_function_coverage=1 00:23:17.347 --rc genhtml_legend=1 00:23:17.347 --rc geninfo_all_blocks=1 00:23:17.347 --rc 
geninfo_unexecuted_blocks=1 00:23:17.347 00:23:17.347 ' 00:23:17.347 10:17:31 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:17.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.347 --rc genhtml_branch_coverage=1 00:23:17.347 --rc genhtml_function_coverage=1 00:23:17.347 --rc genhtml_legend=1 00:23:17.347 --rc geninfo_all_blocks=1 00:23:17.347 --rc geninfo_unexecuted_blocks=1 00:23:17.347 00:23:17.347 ' 00:23:17.347 10:17:31 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:17.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.347 --rc genhtml_branch_coverage=1 00:23:17.347 --rc genhtml_function_coverage=1 00:23:17.347 --rc genhtml_legend=1 00:23:17.347 --rc geninfo_all_blocks=1 00:23:17.347 --rc geninfo_unexecuted_blocks=1 00:23:17.347 00:23:17.347 ' 00:23:17.347 10:17:31 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:17.347 10:17:31 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.347 10:17:31 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.347 10:17:31 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.347 10:17:31 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.347 10:17:31 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.347 10:17:31 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:17.347 10:17:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@51 -- # : 0 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.347 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.347 10:17:31 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.347 10:17:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:17.347 10:17:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:17.347 10:17:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:17.347 10:17:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:17.347 10:17:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:17.348 10:17:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:17.348 10:17:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:17.348 10:17:31 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UelIImuZZ3 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UelIImuZZ3 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UelIImuZZ3 00:23:17.348 10:17:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.UelIImuZZ3 00:23:17.348 10:17:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TX2wssVhbk 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:17.348 10:17:31 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TX2wssVhbk 00:23:17.348 10:17:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TX2wssVhbk 00:23:17.348 10:17:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TX2wssVhbk 00:23:17.348 10:17:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=85312 00:23:17.348 10:17:31 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.348 10:17:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85312 00:23:17.348 10:17:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85312 ']' 00:23:17.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
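[Annotation] The prep_key trace above writes each hex key to a mktemp file in the NVMe/TCP TLS PSK interchange format (prefix NVMeTLSkey-1, digest 0) and then restricts it to mode 0600. The following is only a sketch of what format_interchange_psk/format_key produce here: the function name, prefix, key, and digest come from the trace, but the base64(PSK + CRC-32) layout is an assumption based on the PSK interchange format, not the actual inline python from nvmf/common.sh.

format_interchange_psk() {
    # Sketch only: real implementation lives in test/nvmf/common.sh.
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
# CRC-32 of the configured PSK, appended little-endian before base64 encoding
crc = zlib.crc32(raw).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(raw + crc).decode()}:")
PYEOF
}

# e.g. format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"; chmod 0600 "$path"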
00:23:17.348 10:17:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.348 10:17:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.348 10:17:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.348 10:17:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.348 10:17:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:17.607 [2024-11-19 10:17:31.312715] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:23:17.607 [2024-11-19 10:17:31.313224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85312 ] 00:23:17.607 [2024-11-19 10:17:31.477081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.866 [2024-11-19 10:17:31.544489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.866 [2024-11-19 10:17:31.623458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:18.126 10:17:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:18.126 [2024-11-19 10:17:31.844302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.126 null0 00:23:18.126 [2024-11-19 10:17:31.876190] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.126 [2024-11-19 10:17:31.876562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.126 10:17:31 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:18.126 [2024-11-19 10:17:31.904199] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:18.126 request: 00:23:18.126 { 00:23:18.126 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.126 "secure_channel": false, 
00:23:18.126 "listen_address": { 00:23:18.126 "trtype": "tcp", 00:23:18.126 "traddr": "127.0.0.1", 00:23:18.126 "trsvcid": "4420" 00:23:18.126 }, 00:23:18.126 "method": "nvmf_subsystem_add_listener", 00:23:18.126 "req_id": 1 00:23:18.126 } 00:23:18.126 Got JSON-RPC error response 00:23:18.126 response: 00:23:18.126 { 00:23:18.126 "code": -32602, 00:23:18.126 "message": "Invalid parameters" 00:23:18.126 } 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.126 10:17:31 keyring_file -- keyring/file.sh@47 -- # bperfpid=85322 00:23:18.126 10:17:31 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85322 /var/tmp/bperf.sock 00:23:18.126 10:17:31 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85322 ']' 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:18.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.126 10:17:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:18.126 [2024-11-19 10:17:31.974399] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:23:18.126 [2024-11-19 10:17:31.974715] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85322 ] 00:23:18.385 [2024-11-19 10:17:32.128217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.385 [2024-11-19 10:17:32.195462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.385 [2024-11-19 10:17:32.252267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:19.322 10:17:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.322 10:17:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:19.322 10:17:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UelIImuZZ3 00:23:19.322 10:17:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UelIImuZZ3 00:23:19.581 10:17:33 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TX2wssVhbk 00:23:19.581 10:17:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TX2wssVhbk 00:23:19.840 10:17:33 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:23:19.840 10:17:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:19.840 10:17:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:19.840 10:17:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:19.840 10:17:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.100 10:17:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.UelIImuZZ3 == \/\t\m\p\/\t\m\p\.\U\e\l\I\I\m\u\Z\Z\3 ]] 00:23:20.100 10:17:33 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:23:20.100 10:17:33 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:23:20.100 10:17:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:20.100 10:17:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:20.100 10:17:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.359 10:17:34 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.TX2wssVhbk == \/\t\m\p\/\t\m\p\.\T\X\2\w\s\s\V\h\b\k ]] 00:23:20.359 10:17:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:23:20.359 10:17:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:20.359 10:17:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:20.359 10:17:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:20.359 10:17:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.359 10:17:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:20.618 10:17:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:20.618 10:17:34 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:23:20.618 10:17:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:20.618 10:17:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:20.618 10:17:34 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:20.618 10:17:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.618 10:17:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:20.878 10:17:34 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:23:20.878 10:17:34 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:20.878 10:17:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:21.136 [2024-11-19 10:17:34.978796] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.395 nvme0n1 00:23:21.396 10:17:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:23:21.396 10:17:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:21.396 10:17:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:21.396 10:17:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:21.396 10:17:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:21.396 10:17:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:21.654 10:17:35 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:23:21.654 10:17:35 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:23:21.654 10:17:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:21.655 10:17:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:21.655 10:17:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:21.655 10:17:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:21.655 10:17:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:21.913 10:17:35 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:23:21.913 10:17:35 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:21.913 Running I/O for 1 seconds... 
00:23:23.290 11402.00 IOPS, 44.54 MiB/s 00:23:23.290 Latency(us) 00:23:23.290 [2024-11-19T10:17:37.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.290 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:23.290 nvme0n1 : 1.01 11453.45 44.74 0.00 0.00 11144.35 4796.04 19422.49 00:23:23.290 [2024-11-19T10:17:37.179Z] =================================================================================================================== 00:23:23.290 [2024-11-19T10:17:37.179Z] Total : 11453.45 44.74 0.00 0.00 11144.35 4796.04 19422.49 00:23:23.290 { 00:23:23.290 "results": [ 00:23:23.290 { 00:23:23.290 "job": "nvme0n1", 00:23:23.290 "core_mask": "0x2", 00:23:23.290 "workload": "randrw", 00:23:23.290 "percentage": 50, 00:23:23.290 "status": "finished", 00:23:23.290 "queue_depth": 128, 00:23:23.290 "io_size": 4096, 00:23:23.290 "runtime": 1.006771, 00:23:23.290 "iops": 11453.448698860018, 00:23:23.290 "mibps": 44.74003397992195, 00:23:23.290 "io_failed": 0, 00:23:23.290 "io_timeout": 0, 00:23:23.290 "avg_latency_us": 11144.351823148667, 00:23:23.290 "min_latency_us": 4796.043636363636, 00:23:23.290 "max_latency_us": 19422.487272727274 00:23:23.290 } 00:23:23.290 ], 00:23:23.290 "core_count": 1 00:23:23.290 } 00:23:23.290 10:17:36 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:23.290 10:17:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:23.290 10:17:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:23:23.290 10:17:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:23.290 10:17:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:23.290 10:17:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:23.290 10:17:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:23.290 10:17:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:23.548 10:17:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:23.548 10:17:37 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:23:23.548 10:17:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:23.548 10:17:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:23.548 10:17:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:23.548 10:17:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:23.548 10:17:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:24.149 10:17:37 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:23:24.149 10:17:37 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:24.149 10:17:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:24.149 10:17:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:24.149 10:17:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:24.149 10:17:37 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.149 10:17:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:24.149 10:17:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.149 10:17:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:24.149 10:17:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:24.149 [2024-11-19 10:17:37.982626] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:24.149 [2024-11-19 10:17:37.982923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c770 (107): Transport endpoint is not connected 00:23:24.149 [2024-11-19 10:17:37.983913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c770 (9): Bad file descriptor 00:23:24.150 [2024-11-19 10:17:37.984911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:24.150 [2024-11-19 10:17:37.984941] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:24.150 [2024-11-19 10:17:37.984952] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:24.150 [2024-11-19 10:17:37.984964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
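[Annotation] The repeated get_refcnt/get_key expansions in this trace all reduce to one keyring_get_keys RPC filtered with jq. The command strings below are taken verbatim from the trace; wrapping them in functions is just a sketch of the helpers in keyring/common.sh.

bperf_cmd() {
    # Same rpc.py path and bperf socket as traced above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}
get_key() {
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}
get_refcnt() {
    get_key "$1" | jq -r .refcnt
}
# e.g. (( $(get_refcnt key0) == 2 )) once a controller is attached with --psk key0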
00:23:24.150 request: 00:23:24.150 { 00:23:24.150 "name": "nvme0", 00:23:24.150 "trtype": "tcp", 00:23:24.150 "traddr": "127.0.0.1", 00:23:24.150 "adrfam": "ipv4", 00:23:24.150 "trsvcid": "4420", 00:23:24.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:24.150 "prchk_reftag": false, 00:23:24.150 "prchk_guard": false, 00:23:24.150 "hdgst": false, 00:23:24.150 "ddgst": false, 00:23:24.150 "psk": "key1", 00:23:24.150 "allow_unrecognized_csi": false, 00:23:24.150 "method": "bdev_nvme_attach_controller", 00:23:24.150 "req_id": 1 00:23:24.150 } 00:23:24.150 Got JSON-RPC error response 00:23:24.150 response: 00:23:24.150 { 00:23:24.150 "code": -5, 00:23:24.150 "message": "Input/output error" 00:23:24.150 } 00:23:24.150 10:17:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:24.150 10:17:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.150 10:17:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.150 10:17:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.150 10:17:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:23:24.150 10:17:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:24.150 10:17:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:24.150 10:17:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:24.150 10:17:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:24.150 10:17:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:24.717 10:17:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:24.717 10:17:38 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:23:24.717 10:17:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:24.717 10:17:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:24.717 10:17:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:24.717 10:17:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:24.717 10:17:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:24.976 10:17:38 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:23:24.976 10:17:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:23:24.976 10:17:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:25.235 10:17:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:23:25.235 10:17:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:25.493 10:17:39 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:23:25.493 10:17:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:25.493 10:17:39 keyring_file -- keyring/file.sh@78 -- # jq length 00:23:25.752 10:17:39 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:23:25.752 10:17:39 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.UelIImuZZ3 00:23:25.752 10:17:39 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.UelIImuZZ3 00:23:25.752 10:17:39 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:23:25.752 10:17:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.UelIImuZZ3 00:23:25.752 10:17:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:25.752 10:17:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.752 10:17:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:25.752 10:17:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.752 10:17:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UelIImuZZ3 00:23:25.752 10:17:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UelIImuZZ3 00:23:26.011 [2024-11-19 10:17:39.834302] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UelIImuZZ3': 0100660 00:23:26.011 [2024-11-19 10:17:39.834401] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:26.011 request: 00:23:26.011 { 00:23:26.011 "name": "key0", 00:23:26.011 "path": "/tmp/tmp.UelIImuZZ3", 00:23:26.011 "method": "keyring_file_add_key", 00:23:26.011 "req_id": 1 00:23:26.011 } 00:23:26.011 Got JSON-RPC error response 00:23:26.011 response: 00:23:26.011 { 00:23:26.011 "code": -1, 00:23:26.011 "message": "Operation not permitted" 00:23:26.011 } 00:23:26.011 10:17:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:26.011 10:17:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.011 10:17:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.011 10:17:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.011 10:17:39 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.UelIImuZZ3 00:23:26.011 10:17:39 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UelIImuZZ3 00:23:26.011 10:17:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UelIImuZZ3 00:23:26.270 10:17:40 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.UelIImuZZ3 00:23:26.270 10:17:40 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:23:26.270 10:17:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:26.270 10:17:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:26.270 10:17:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:26.270 10:17:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:26.270 10:17:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:26.841 10:17:40 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:23:26.841 10:17:40 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:26.841 10:17:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:26.841 10:17:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:26.841 10:17:40 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:26.841 10:17:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.841 10:17:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:26.841 10:17:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.841 10:17:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:26.841 10:17:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:27.100 [2024-11-19 10:17:40.730540] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.UelIImuZZ3': No such file or directory 00:23:27.100 [2024-11-19 10:17:40.730587] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:27.100 [2024-11-19 10:17:40.730642] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:27.100 [2024-11-19 10:17:40.730651] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:23:27.100 [2024-11-19 10:17:40.730660] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:27.100 [2024-11-19 10:17:40.730669] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:27.100 request: 00:23:27.100 { 00:23:27.100 "name": "nvme0", 00:23:27.100 "trtype": "tcp", 00:23:27.100 "traddr": "127.0.0.1", 00:23:27.100 "adrfam": "ipv4", 00:23:27.100 "trsvcid": "4420", 00:23:27.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:27.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:27.100 "prchk_reftag": false, 00:23:27.100 "prchk_guard": false, 00:23:27.100 "hdgst": false, 00:23:27.100 "ddgst": false, 00:23:27.100 "psk": "key0", 00:23:27.100 "allow_unrecognized_csi": false, 00:23:27.100 "method": "bdev_nvme_attach_controller", 00:23:27.100 "req_id": 1 00:23:27.100 } 00:23:27.100 Got JSON-RPC error response 00:23:27.100 response: 00:23:27.100 { 00:23:27.100 "code": -19, 00:23:27.100 "message": "No such device" 00:23:27.100 } 00:23:27.100 10:17:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:27.100 10:17:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:27.100 10:17:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:27.100 10:17:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:27.100 10:17:40 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:23:27.100 10:17:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:27.359 10:17:41 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:27.359 
10:17:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QgrPKEQfyh 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:27.359 10:17:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:27.359 10:17:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:27.359 10:17:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:27.359 10:17:41 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:27.359 10:17:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:27.359 10:17:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QgrPKEQfyh 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QgrPKEQfyh 00:23:27.359 10:17:41 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.QgrPKEQfyh 00:23:27.359 10:17:41 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QgrPKEQfyh 00:23:27.359 10:17:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QgrPKEQfyh 00:23:27.619 10:17:41 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:27.619 10:17:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:27.879 nvme0n1 00:23:28.137 10:17:41 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:23:28.137 10:17:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:28.137 10:17:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:28.137 10:17:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:28.137 10:17:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:28.137 10:17:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:28.396 10:17:42 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:23:28.396 10:17:42 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:23:28.396 10:17:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:28.668 10:17:42 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:23:28.668 10:17:42 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:23:28.668 10:17:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:28.668 10:17:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:28.668 10:17:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:28.926 10:17:42 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:23:28.926 10:17:42 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:23:28.926 10:17:42 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:23:28.926 10:17:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:28.926 10:17:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:28.926 10:17:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:28.926 10:17:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:29.185 10:17:42 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:23:29.185 10:17:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:29.185 10:17:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:29.444 10:17:43 keyring_file -- keyring/file.sh@105 -- # jq length 00:23:29.444 10:17:43 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:23:29.444 10:17:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:29.756 10:17:43 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:23:29.756 10:17:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QgrPKEQfyh 00:23:29.756 10:17:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QgrPKEQfyh 00:23:30.014 10:17:43 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TX2wssVhbk 00:23:30.014 10:17:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TX2wssVhbk 00:23:30.273 10:17:43 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:30.273 10:17:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:30.531 nvme0n1 00:23:30.531 10:17:44 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:23:30.531 10:17:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:30.790 10:17:44 keyring_file -- keyring/file.sh@113 -- # config='{ 00:23:30.790 "subsystems": [ 00:23:30.790 { 00:23:30.790 "subsystem": "keyring", 00:23:30.790 "config": [ 00:23:30.790 { 00:23:30.790 "method": "keyring_file_add_key", 00:23:30.790 "params": { 00:23:30.790 "name": "key0", 00:23:30.790 "path": "/tmp/tmp.QgrPKEQfyh" 00:23:30.790 } 00:23:30.790 }, 00:23:30.790 { 00:23:30.790 "method": "keyring_file_add_key", 00:23:30.790 "params": { 00:23:30.790 "name": "key1", 00:23:30.790 "path": "/tmp/tmp.TX2wssVhbk" 00:23:30.790 } 00:23:30.790 } 00:23:30.790 ] 00:23:30.790 }, 00:23:30.790 { 00:23:30.790 "subsystem": "iobuf", 00:23:30.790 "config": [ 00:23:30.790 { 00:23:30.790 "method": "iobuf_set_options", 00:23:30.790 "params": { 00:23:30.790 "small_pool_count": 8192, 00:23:30.790 "large_pool_count": 1024, 00:23:30.790 "small_bufsize": 8192, 00:23:30.790 "large_bufsize": 135168, 00:23:30.790 "enable_numa": false 00:23:30.790 } 00:23:30.790 } 00:23:30.790 ] 00:23:30.790 }, 00:23:30.790 { 00:23:30.790 "subsystem": 
"sock", 00:23:30.790 "config": [ 00:23:30.790 { 00:23:30.790 "method": "sock_set_default_impl", 00:23:30.790 "params": { 00:23:30.791 "impl_name": "uring" 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "sock_impl_set_options", 00:23:30.791 "params": { 00:23:30.791 "impl_name": "ssl", 00:23:30.791 "recv_buf_size": 4096, 00:23:30.791 "send_buf_size": 4096, 00:23:30.791 "enable_recv_pipe": true, 00:23:30.791 "enable_quickack": false, 00:23:30.791 "enable_placement_id": 0, 00:23:30.791 "enable_zerocopy_send_server": true, 00:23:30.791 "enable_zerocopy_send_client": false, 00:23:30.791 "zerocopy_threshold": 0, 00:23:30.791 "tls_version": 0, 00:23:30.791 "enable_ktls": false 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "sock_impl_set_options", 00:23:30.791 "params": { 00:23:30.791 "impl_name": "posix", 00:23:30.791 "recv_buf_size": 2097152, 00:23:30.791 "send_buf_size": 2097152, 00:23:30.791 "enable_recv_pipe": true, 00:23:30.791 "enable_quickack": false, 00:23:30.791 "enable_placement_id": 0, 00:23:30.791 "enable_zerocopy_send_server": true, 00:23:30.791 "enable_zerocopy_send_client": false, 00:23:30.791 "zerocopy_threshold": 0, 00:23:30.791 "tls_version": 0, 00:23:30.791 "enable_ktls": false 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "sock_impl_set_options", 00:23:30.791 "params": { 00:23:30.791 "impl_name": "uring", 00:23:30.791 "recv_buf_size": 2097152, 00:23:30.791 "send_buf_size": 2097152, 00:23:30.791 "enable_recv_pipe": true, 00:23:30.791 "enable_quickack": false, 00:23:30.791 "enable_placement_id": 0, 00:23:30.791 "enable_zerocopy_send_server": false, 00:23:30.791 "enable_zerocopy_send_client": false, 00:23:30.791 "zerocopy_threshold": 0, 00:23:30.791 "tls_version": 0, 00:23:30.791 "enable_ktls": false 00:23:30.791 } 00:23:30.791 } 00:23:30.791 ] 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "subsystem": "vmd", 00:23:30.791 "config": [] 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "subsystem": "accel", 00:23:30.791 "config": [ 00:23:30.791 { 00:23:30.791 "method": "accel_set_options", 00:23:30.791 "params": { 00:23:30.791 "small_cache_size": 128, 00:23:30.791 "large_cache_size": 16, 00:23:30.791 "task_count": 2048, 00:23:30.791 "sequence_count": 2048, 00:23:30.791 "buf_count": 2048 00:23:30.791 } 00:23:30.791 } 00:23:30.791 ] 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "subsystem": "bdev", 00:23:30.791 "config": [ 00:23:30.791 { 00:23:30.791 "method": "bdev_set_options", 00:23:30.791 "params": { 00:23:30.791 "bdev_io_pool_size": 65535, 00:23:30.791 "bdev_io_cache_size": 256, 00:23:30.791 "bdev_auto_examine": true, 00:23:30.791 "iobuf_small_cache_size": 128, 00:23:30.791 "iobuf_large_cache_size": 16 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "bdev_raid_set_options", 00:23:30.791 "params": { 00:23:30.791 "process_window_size_kb": 1024, 00:23:30.791 "process_max_bandwidth_mb_sec": 0 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "bdev_iscsi_set_options", 00:23:30.791 "params": { 00:23:30.791 "timeout_sec": 30 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "bdev_nvme_set_options", 00:23:30.791 "params": { 00:23:30.791 "action_on_timeout": "none", 00:23:30.791 "timeout_us": 0, 00:23:30.791 "timeout_admin_us": 0, 00:23:30.791 "keep_alive_timeout_ms": 10000, 00:23:30.791 "arbitration_burst": 0, 00:23:30.791 "low_priority_weight": 0, 00:23:30.791 "medium_priority_weight": 0, 00:23:30.791 "high_priority_weight": 0, 00:23:30.791 "nvme_adminq_poll_period_us": 
10000, 00:23:30.791 "nvme_ioq_poll_period_us": 0, 00:23:30.791 "io_queue_requests": 512, 00:23:30.791 "delay_cmd_submit": true, 00:23:30.791 "transport_retry_count": 4, 00:23:30.791 "bdev_retry_count": 3, 00:23:30.791 "transport_ack_timeout": 0, 00:23:30.791 "ctrlr_loss_timeout_sec": 0, 00:23:30.791 "reconnect_delay_sec": 0, 00:23:30.791 "fast_io_fail_timeout_sec": 0, 00:23:30.791 "disable_auto_failback": false, 00:23:30.791 "generate_uuids": false, 00:23:30.791 "transport_tos": 0, 00:23:30.791 "nvme_error_stat": false, 00:23:30.791 "rdma_srq_size": 0, 00:23:30.791 "io_path_stat": false, 00:23:30.791 "allow_accel_sequence": false, 00:23:30.791 "rdma_max_cq_size": 0, 00:23:30.791 "rdma_cm_event_timeout_ms": 0, 00:23:30.791 "dhchap_digests": [ 00:23:30.791 "sha256", 00:23:30.791 "sha384", 00:23:30.791 "sha512" 00:23:30.791 ], 00:23:30.791 "dhchap_dhgroups": [ 00:23:30.791 "null", 00:23:30.791 "ffdhe2048", 00:23:30.791 "ffdhe3072", 00:23:30.791 "ffdhe4096", 00:23:30.791 "ffdhe6144", 00:23:30.791 "ffdhe8192" 00:23:30.791 ] 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "bdev_nvme_attach_controller", 00:23:30.791 "params": { 00:23:30.791 "name": "nvme0", 00:23:30.791 "trtype": "TCP", 00:23:30.791 "adrfam": "IPv4", 00:23:30.791 "traddr": "127.0.0.1", 00:23:30.791 "trsvcid": "4420", 00:23:30.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.791 "prchk_reftag": false, 00:23:30.791 "prchk_guard": false, 00:23:30.791 "ctrlr_loss_timeout_sec": 0, 00:23:30.791 "reconnect_delay_sec": 0, 00:23:30.791 "fast_io_fail_timeout_sec": 0, 00:23:30.791 "psk": "key0", 00:23:30.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:30.791 "hdgst": false, 00:23:30.791 "ddgst": false, 00:23:30.791 "multipath": "multipath" 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "bdev_nvme_set_hotplug", 00:23:30.791 "params": { 00:23:30.791 "period_us": 100000, 00:23:30.791 "enable": false 00:23:30.791 } 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "method": "bdev_wait_for_examine" 00:23:30.791 } 00:23:30.791 ] 00:23:30.791 }, 00:23:30.791 { 00:23:30.791 "subsystem": "nbd", 00:23:30.791 "config": [] 00:23:30.791 } 00:23:30.791 ] 00:23:30.791 }' 00:23:30.791 10:17:44 keyring_file -- keyring/file.sh@115 -- # killprocess 85322 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85322 ']' 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85322 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85322 00:23:30.791 killing process with pid 85322 00:23:30.791 Received shutdown signal, test time was about 1.000000 seconds 00:23:30.791 00:23:30.791 Latency(us) 00:23:30.791 [2024-11-19T10:17:44.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.791 [2024-11-19T10:17:44.680Z] =================================================================================================================== 00:23:30.791 [2024-11-19T10:17:44.680Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85322' 00:23:30.791 
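[Annotation] The JSON dump above is the save_config output captured from the first bdevperf instance (pid 85322); after that process is killed, the test relaunches bdevperf and feeds the same configuration back in via /dev/fd/63. A sketch of that round trip follows; the bdevperf flags and the bperfpid variable match the trace, while the process-substitution detail is assumed rather than copied from keyring/file.sh.

# Capture the live configuration, stop the old instance, replay the config
config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
kill "$bperfpid"; wait "$bperfpid" 2>/dev/null
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
    -c <(echo "$config") &
bperfpid=$!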
10:17:44 keyring_file -- common/autotest_common.sh@973 -- # kill 85322 00:23:30.791 10:17:44 keyring_file -- common/autotest_common.sh@978 -- # wait 85322 00:23:31.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:31.051 10:17:44 keyring_file -- keyring/file.sh@118 -- # bperfpid=85583 00:23:31.051 10:17:44 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85583 /var/tmp/bperf.sock 00:23:31.051 10:17:44 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85583 ']' 00:23:31.051 10:17:44 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:23:31.051 "subsystems": [ 00:23:31.051 { 00:23:31.051 "subsystem": "keyring", 00:23:31.051 "config": [ 00:23:31.051 { 00:23:31.051 "method": "keyring_file_add_key", 00:23:31.051 "params": { 00:23:31.051 "name": "key0", 00:23:31.051 "path": "/tmp/tmp.QgrPKEQfyh" 00:23:31.051 } 00:23:31.051 }, 00:23:31.051 { 00:23:31.051 "method": "keyring_file_add_key", 00:23:31.051 "params": { 00:23:31.051 "name": "key1", 00:23:31.051 "path": "/tmp/tmp.TX2wssVhbk" 00:23:31.051 } 00:23:31.051 } 00:23:31.051 ] 00:23:31.051 }, 00:23:31.051 { 00:23:31.051 "subsystem": "iobuf", 00:23:31.051 "config": [ 00:23:31.051 { 00:23:31.051 "method": "iobuf_set_options", 00:23:31.051 "params": { 00:23:31.051 "small_pool_count": 8192, 00:23:31.051 "large_pool_count": 1024, 00:23:31.051 "small_bufsize": 8192, 00:23:31.051 "large_bufsize": 135168, 00:23:31.051 "enable_numa": false 00:23:31.051 } 00:23:31.051 } 00:23:31.051 ] 00:23:31.051 }, 00:23:31.051 { 00:23:31.051 "subsystem": "sock", 00:23:31.051 "config": [ 00:23:31.051 { 00:23:31.051 "method": "sock_set_default_impl", 00:23:31.051 "params": { 00:23:31.051 "impl_name": "uring" 00:23:31.051 } 00:23:31.051 }, 00:23:31.051 { 00:23:31.051 "method": "sock_impl_set_options", 00:23:31.051 "params": { 00:23:31.051 "impl_name": "ssl", 00:23:31.051 "recv_buf_size": 4096, 00:23:31.051 "send_buf_size": 4096, 00:23:31.051 "enable_recv_pipe": true, 00:23:31.051 "enable_quickack": false, 00:23:31.051 "enable_placement_id": 0, 00:23:31.051 "enable_zerocopy_send_server": true, 00:23:31.051 "enable_zerocopy_send_client": false, 00:23:31.051 "zerocopy_threshold": 0, 00:23:31.051 "tls_version": 0, 00:23:31.051 "enable_ktls": false 00:23:31.051 } 00:23:31.051 }, 00:23:31.051 { 00:23:31.051 "method": "sock_impl_set_options", 00:23:31.051 "params": { 00:23:31.051 "impl_name": "posix", 00:23:31.051 "recv_buf_size": 2097152, 00:23:31.051 "send_buf_size": 2097152, 00:23:31.051 "enable_recv_pipe": true, 00:23:31.051 "enable_quickack": false, 00:23:31.051 "enable_placement_id": 0, 00:23:31.051 "enable_zerocopy_send_server": true, 00:23:31.051 "enable_zerocopy_send_client": false, 00:23:31.051 "zerocopy_threshold": 0, 00:23:31.051 "tls_version": 0, 00:23:31.051 "enable_ktls": false 00:23:31.051 } 00:23:31.051 }, 00:23:31.051 { 00:23:31.052 "method": "sock_impl_set_options", 00:23:31.052 "params": { 00:23:31.052 "impl_name": "uring", 00:23:31.052 "recv_buf_size": 2097152, 00:23:31.052 "send_buf_size": 2097152, 00:23:31.052 "enable_recv_pipe": true, 00:23:31.052 "enable_quickack": false, 00:23:31.052 "enable_placement_id": 0, 00:23:31.052 "enable_zerocopy_send_server": false, 00:23:31.052 "enable_zerocopy_send_client": false, 00:23:31.052 "zerocopy_threshold": 0, 00:23:31.052 "tls_version": 0, 00:23:31.052 "enable_ktls": false 00:23:31.052 } 00:23:31.052 } 00:23:31.052 ] 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 "subsystem": "vmd", 00:23:31.052 "config": [] 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 
"subsystem": "accel", 00:23:31.052 "config": [ 00:23:31.052 { 00:23:31.052 "method": "accel_set_options", 00:23:31.052 "params": { 00:23:31.052 "small_cache_size": 128, 00:23:31.052 "large_cache_size": 16, 00:23:31.052 "task_count": 2048, 00:23:31.052 "sequence_count": 2048, 00:23:31.052 "buf_count": 2048 00:23:31.052 } 00:23:31.052 } 00:23:31.052 ] 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 "subsystem": "bdev", 00:23:31.052 "config": [ 00:23:31.052 { 00:23:31.052 "method": "bdev_set_options", 00:23:31.052 "params": { 00:23:31.052 "bdev_io_pool_size": 65535, 00:23:31.052 "bdev_io_cache_size": 256, 00:23:31.052 "bdev_auto_examine": true, 00:23:31.052 "iobuf_small_cache_size": 128, 00:23:31.052 "iobuf_large_cache_size": 16 00:23:31.052 } 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 "method": "bdev_raid_set_options", 00:23:31.052 "params": { 00:23:31.052 "process_window_size_kb": 1024, 00:23:31.052 "process_max_bandwidth_mb_sec": 0 00:23:31.052 } 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 "method": "bdev_iscsi_set_options", 00:23:31.052 "params": { 00:23:31.052 "timeout_sec": 30 00:23:31.052 } 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 "method": "bdev_nvme_set_options", 00:23:31.052 "params": { 00:23:31.052 "action_on_timeout": "none", 00:23:31.052 "timeout_us": 0, 00:23:31.052 "timeout_admin_us": 0, 00:23:31.052 "keep_alive_timeout_ms": 10000, 00:23:31.052 "arbitration_burst": 0, 00:23:31.052 "low_priority_weight": 0, 00:23:31.052 "medium_priority_weight": 0, 00:23:31.052 "high_priority_weight": 0, 00:23:31.052 "nvme_adminq_poll_period_us": 10000, 00:23:31.052 "nvme_ioq_poll_period_us": 0, 00:23:31.052 "io_queue_requests": 512, 00:23:31.052 "delay_cmd_submit": true, 00:23:31.052 "transport_retry_count": 4, 00:23:31.052 "bdev_retry_count": 3, 00:23:31.052 "transport_ack_timeout": 0, 00:23:31.052 "ctrlr_loss_timeout_sec": 0, 00:23:31.052 "reconnect_delay_sec": 0, 00:23:31.052 "fast_io_fail_timeout_sec": 0, 00:23:31.052 "disable_auto_failback": false, 00:23:31.052 "generate_uuids": false, 00:23:31.052 "transport_tos": 0, 00:23:31.052 "nvme_error_stat": false, 00:23:31.052 "rdma_srq_size": 0, 00:23:31.052 "io_path_stat": false, 00:23:31.052 "allow_accel_sequence": false, 00:23:31.052 "rdma_max_cq_size": 0, 00:23:31.052 "rdma_cm_event_timeout_ms": 0, 00:23:31.052 "dhchap_digests": [ 00:23:31.052 "sha256", 00:23:31.052 "sha384", 00:23:31.052 "sha512" 00:23:31.052 ], 00:23:31.052 "dhchap_dhgroups": [ 00:23:31.052 "null", 00:23:31.052 "ffdhe2048", 00:23:31.052 "ffdhe3072", 00:23:31.052 "ffdhe4096", 00:23:31.052 "ffdhe6144", 00:23:31.052 "ffdhe8192" 00:23:31.052 ] 00:23:31.052 } 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 "method": "bdev_nvme_attach_controller", 00:23:31.052 "params": { 00:23:31.052 "name": "nvme0", 00:23:31.052 "trtype": "TCP", 00:23:31.052 "adrfam": "IPv4", 00:23:31.052 "traddr": "127.0.0.1", 00:23:31.052 "trsvcid": "4420", 00:23:31.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.052 "prchk_reftag": false, 00:23:31.052 "prchk_guard": false, 00:23:31.052 "ctrlr_loss_timeout_sec": 0, 00:23:31.052 "reconnect_delay_sec": 0, 00:23:31.052 "fast_io_fail_timeout_sec": 0, 00:23:31.052 "psk": "key0", 00:23:31.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:31.052 "hdgst": false, 00:23:31.052 "ddgst": false, 00:23:31.052 "multipath": "multipath" 00:23:31.052 } 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 "method": "bdev_nvme_set_hotplug", 00:23:31.052 "params": { 00:23:31.052 "period_us": 100000, 00:23:31.052 "enable": false 00:23:31.052 } 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 
"method": "bdev_wait_for_examine" 00:23:31.052 } 00:23:31.052 ] 00:23:31.052 }, 00:23:31.052 { 00:23:31.052 "subsystem": "nbd", 00:23:31.052 "config": [] 00:23:31.052 } 00:23:31.052 ] 00:23:31.052 }' 00:23:31.052 10:17:44 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:31.052 10:17:44 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:31.052 10:17:44 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.052 10:17:44 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:31.052 10:17:44 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.052 10:17:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:31.052 [2024-11-19 10:17:44.906905] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:23:31.052 [2024-11-19 10:17:44.907247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85583 ] 00:23:31.312 [2024-11-19 10:17:45.052287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.312 [2024-11-19 10:17:45.105874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.571 [2024-11-19 10:17:45.240646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:31.571 [2024-11-19 10:17:45.296162] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.571 10:17:45 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.571 10:17:45 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:31.571 10:17:45 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:23:31.571 10:17:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:31.571 10:17:45 keyring_file -- keyring/file.sh@121 -- # jq length 00:23:31.830 10:17:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:31.830 10:17:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:23:31.830 10:17:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:31.830 10:17:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:31.830 10:17:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:31.830 10:17:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:31.830 10:17:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:32.398 10:17:46 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:23:32.398 10:17:46 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:23:32.398 10:17:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:32.398 10:17:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:32.398 10:17:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:32.398 10:17:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:32.398 10:17:46 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:32.657 10:17:46 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:23:32.657 10:17:46 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:23:32.657 10:17:46 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:23:32.657 10:17:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:32.915 10:17:46 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:23:32.915 10:17:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:32.915 10:17:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.QgrPKEQfyh /tmp/tmp.TX2wssVhbk 00:23:32.915 10:17:46 keyring_file -- keyring/file.sh@20 -- # killprocess 85583 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85583 ']' 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85583 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85583 00:23:32.916 killing process with pid 85583 00:23:32.916 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.916 00:23:32.916 Latency(us) 00:23:32.916 [2024-11-19T10:17:46.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.916 [2024-11-19T10:17:46.805Z] =================================================================================================================== 00:23:32.916 [2024-11-19T10:17:46.805Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85583' 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@973 -- # kill 85583 00:23:32.916 10:17:46 keyring_file -- common/autotest_common.sh@978 -- # wait 85583 00:23:33.174 10:17:46 keyring_file -- keyring/file.sh@21 -- # killprocess 85312 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85312 ']' 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85312 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85312 00:23:33.174 killing process with pid 85312 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85312' 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@973 -- # kill 85312 00:23:33.174 10:17:46 keyring_file -- common/autotest_common.sh@978 -- # wait 85312 00:23:33.434 ************************************ 00:23:33.434 END TEST keyring_file 00:23:33.434 ************************************ 00:23:33.434 00:23:33.434 real 0m16.418s 00:23:33.434 user 0m41.881s 
00:23:33.434 sys 0m3.125s 00:23:33.434 10:17:47 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.434 10:17:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:33.693 10:17:47 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:23:33.693 10:17:47 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:33.693 10:17:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.693 10:17:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.693 10:17:47 -- common/autotest_common.sh@10 -- # set +x 00:23:33.693 ************************************ 00:23:33.693 START TEST keyring_linux 00:23:33.693 ************************************ 00:23:33.693 10:17:47 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:33.693 Joined session keyring: 363807044 00:23:33.693 * Looking for test storage... 00:23:33.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:33.693 10:17:47 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:33.693 10:17:47 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:23:33.693 10:17:47 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:33.693 10:17:47 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@345 -- # : 1 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.693 10:17:47 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@368 -- # return 0 00:23:33.694 10:17:47 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.694 10:17:47 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:33.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.694 --rc genhtml_branch_coverage=1 00:23:33.694 --rc genhtml_function_coverage=1 00:23:33.694 --rc genhtml_legend=1 00:23:33.694 --rc geninfo_all_blocks=1 00:23:33.694 --rc geninfo_unexecuted_blocks=1 00:23:33.694 00:23:33.694 ' 00:23:33.694 10:17:47 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:33.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.694 --rc genhtml_branch_coverage=1 00:23:33.694 --rc genhtml_function_coverage=1 00:23:33.694 --rc genhtml_legend=1 00:23:33.694 --rc geninfo_all_blocks=1 00:23:33.694 --rc geninfo_unexecuted_blocks=1 00:23:33.694 00:23:33.694 ' 00:23:33.694 10:17:47 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:33.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.694 --rc genhtml_branch_coverage=1 00:23:33.694 --rc genhtml_function_coverage=1 00:23:33.694 --rc genhtml_legend=1 00:23:33.694 --rc geninfo_all_blocks=1 00:23:33.694 --rc geninfo_unexecuted_blocks=1 00:23:33.694 00:23:33.694 ' 00:23:33.694 10:17:47 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:33.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.694 --rc genhtml_branch_coverage=1 00:23:33.694 --rc genhtml_function_coverage=1 00:23:33.694 --rc genhtml_legend=1 00:23:33.694 --rc geninfo_all_blocks=1 00:23:33.694 --rc geninfo_unexecuted_blocks=1 00:23:33.694 00:23:33.694 ' 00:23:33.694 10:17:47 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:33.694 10:17:47 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.694 10:17:47 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6147973c-080a-4377-b1e7-85172bdc559a 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=6147973c-080a-4377-b1e7-85172bdc559a 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.694 10:17:47 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.694 10:17:47 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.694 10:17:47 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.694 10:17:47 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.694 10:17:47 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:33.694 10:17:47 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.694 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.694 10:17:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:33.694 10:17:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:33.694 10:17:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:33.694 10:17:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:33.694 10:17:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:33.694 10:17:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:33.694 10:17:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:33.694 10:17:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:33.694 10:17:47 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:33.694 10:17:47 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:33.694 10:17:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:33.694 10:17:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:33.694 10:17:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:33.694 10:17:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:33.953 /tmp/:spdk-test:key0 00:23:33.953 10:17:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:23:33.953 10:17:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:33.953 10:17:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:33.953 10:17:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:33.953 10:17:47 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:33.953 10:17:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:33.953 10:17:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:33.953 10:17:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:33.953 /tmp/:spdk-test:key1 00:23:33.953 10:17:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85704 00:23:33.953 10:17:47 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:33.953 10:17:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85704 00:23:33.953 10:17:47 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85704 ']' 00:23:33.953 10:17:47 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.953 10:17:47 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.953 10:17:47 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.953 10:17:47 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.953 10:17:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:33.953 [2024-11-19 10:17:47.772338] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
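For context on the prep_key calls above: each raw hex key is piped through an elided "python -" one-liner to produce the NVMeTLSkey-1:00:... interchange string, which is then written to /tmp/:spdk-test:key0 (and key1) with 0600 permissions. A minimal bash sketch of that shape follows; the CRC-32 append and its little-endian byte order are assumptions about the interchange format rather than the contents of the elided helper, and the key value is the test's own.

# Sketch only: derive an NVMe TLS interchange PSK string and write it to a 0600 key file,
# mirroring the prep_key steps above. The CRC-32 append and byte order are assumptions.
key=00112233445566778899aabbccddeeff
psk=$(python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order, not taken from the elided helper
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
)
printf '%s' "$psk" > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0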
00:23:33.953 [2024-11-19 10:17:47.772709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85704 ] 00:23:34.211 [2024-11-19 10:17:47.924655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.211 [2024-11-19 10:17:47.991243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.211 [2024-11-19 10:17:48.067831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:23:35.172 10:17:48 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:35.172 [2024-11-19 10:17:48.748735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.172 null0 00:23:35.172 [2024-11-19 10:17:48.780706] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.172 [2024-11-19 10:17:48.781126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.172 10:17:48 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:35.172 814977838 00:23:35.172 10:17:48 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:35.172 491194150 00:23:35.172 10:17:48 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85722 00:23:35.172 10:17:48 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:35.172 10:17:48 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85722 /var/tmp/bperf.sock 00:23:35.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85722 ']' 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.172 10:17:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:35.172 [2024-11-19 10:17:48.867209] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:23:35.172 [2024-11-19 10:17:48.867571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85722 ] 00:23:35.172 [2024-11-19 10:17:49.014570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.430 [2024-11-19 10:17:49.067867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.430 10:17:49 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.430 10:17:49 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:23:35.430 10:17:49 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:35.430 10:17:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:35.688 10:17:49 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:35.688 10:17:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:35.946 [2024-11-19 10:17:49.690089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:35.946 10:17:49 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:35.946 10:17:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:36.204 [2024-11-19 10:17:50.017825] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.204 nvme0n1 00:23:36.463 10:17:50 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:36.463 10:17:50 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:36.463 10:17:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:36.463 10:17:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:36.463 10:17:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:36.463 10:17:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.722 10:17:50 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:36.722 10:17:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:36.722 10:17:50 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:36.722 10:17:50 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:36.722 10:17:50 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.722 10:17:50 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:36.722 10:17:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.981 10:17:50 keyring_linux -- keyring/linux.sh@25 -- # sn=814977838 00:23:36.981 10:17:50 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:36.981 10:17:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
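The controller attach above is driven entirely over the bdevperf RPC socket, with the TLS PSK referenced by its kernel-keyring name rather than a file path. Condensed from the log entries above (rpc.py stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the NQNs, address, PSK string, and key name are the test's own), the sequence is roughly:

# Load the interchange PSK into the session keyring; keyctl prints its serial (814977838 in this run).
keyctl add user :spdk-test:key0 \
  "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# Enable the Linux keyring backend, finish init, then attach over TCP by key name.
rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
rpc.py -s /var/tmp/bperf.sock framework_start_init
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
  -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
  --psk :spdk-test:key0

I/O is then started through the matching helper, /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests, which is where the roughly 13.7k IOPS result further down comes from.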
00:23:36.981 10:17:50 keyring_linux -- keyring/linux.sh@26 -- # [[ 814977838 == \8\1\4\9\7\7\8\3\8 ]] 00:23:36.981 10:17:50 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 814977838 00:23:36.981 10:17:50 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:36.981 10:17:50 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:36.981 Running I/O for 1 seconds... 00:23:37.916 13707.00 IOPS, 53.54 MiB/s 00:23:37.916 Latency(us) 00:23:37.916 [2024-11-19T10:17:51.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.916 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:37.916 nvme0n1 : 1.01 13715.29 53.58 0.00 0.00 9285.85 6583.39 16205.27 00:23:37.916 [2024-11-19T10:17:51.805Z] =================================================================================================================== 00:23:37.916 [2024-11-19T10:17:51.805Z] Total : 13715.29 53.58 0.00 0.00 9285.85 6583.39 16205.27 00:23:37.916 { 00:23:37.916 "results": [ 00:23:37.916 { 00:23:37.916 "job": "nvme0n1", 00:23:37.916 "core_mask": "0x2", 00:23:37.916 "workload": "randread", 00:23:37.916 "status": "finished", 00:23:37.916 "queue_depth": 128, 00:23:37.916 "io_size": 4096, 00:23:37.916 "runtime": 1.008801, 00:23:37.916 "iops": 13715.29171759346, 00:23:37.916 "mibps": 53.57535827184945, 00:23:37.916 "io_failed": 0, 00:23:37.916 "io_timeout": 0, 00:23:37.916 "avg_latency_us": 9285.849131383216, 00:23:37.916 "min_latency_us": 6583.389090909091, 00:23:37.916 "max_latency_us": 16205.265454545455 00:23:37.916 } 00:23:37.916 ], 00:23:37.916 "core_count": 1 00:23:37.916 } 00:23:38.174 10:17:51 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:38.174 10:17:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:38.432 10:17:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:38.432 10:17:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:38.432 10:17:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:38.432 10:17:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:38.432 10:17:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:38.432 10:17:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:38.691 10:17:52 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:38.691 10:17:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:38.691 10:17:52 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:38.691 10:17:52 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:38.691 10:17:52 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:23:38.691 10:17:52 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:38.691 
10:17:52 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:38.691 10:17:52 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.691 10:17:52 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:38.691 10:17:52 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.691 10:17:52 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:38.691 10:17:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:38.950 [2024-11-19 10:17:52.645172] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:38.950 [2024-11-19 10:17:52.646127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f65d0 (107): Transport endpoint is not connected 00:23:38.950 [2024-11-19 10:17:52.647118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f65d0 (9): Bad file descriptor 00:23:38.950 [2024-11-19 10:17:52.648115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:38.950 [2024-11-19 10:17:52.648141] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:38.950 [2024-11-19 10:17:52.648153] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:38.950 [2024-11-19 10:17:52.648164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:23:38.950 request: 00:23:38.950 { 00:23:38.950 "name": "nvme0", 00:23:38.950 "trtype": "tcp", 00:23:38.950 "traddr": "127.0.0.1", 00:23:38.950 "adrfam": "ipv4", 00:23:38.950 "trsvcid": "4420", 00:23:38.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:38.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:38.950 "prchk_reftag": false, 00:23:38.950 "prchk_guard": false, 00:23:38.950 "hdgst": false, 00:23:38.950 "ddgst": false, 00:23:38.950 "psk": ":spdk-test:key1", 00:23:38.950 "allow_unrecognized_csi": false, 00:23:38.950 "method": "bdev_nvme_attach_controller", 00:23:38.950 "req_id": 1 00:23:38.950 } 00:23:38.950 Got JSON-RPC error response 00:23:38.950 response: 00:23:38.950 { 00:23:38.950 "code": -5, 00:23:38.950 "message": "Input/output error" 00:23:38.950 } 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@33 -- # sn=814977838 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 814977838 00:23:38.950 1 links removed 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@33 -- # sn=491194150 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 491194150 00:23:38.950 1 links removed 00:23:38.950 10:17:52 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85722 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85722 ']' 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85722 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85722 00:23:38.950 killing process with pid 85722 00:23:38.950 Received shutdown signal, test time was about 1.000000 seconds 00:23:38.950 00:23:38.950 Latency(us) 00:23:38.950 [2024-11-19T10:17:52.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.950 [2024-11-19T10:17:52.839Z] =================================================================================================================== 00:23:38.950 [2024-11-19T10:17:52.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.950 10:17:52 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85722' 00:23:38.950 10:17:52 keyring_linux -- common/autotest_common.sh@973 -- # kill 85722 00:23:38.951 10:17:52 keyring_linux -- common/autotest_common.sh@978 -- # wait 85722 00:23:39.207 10:17:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85704 00:23:39.207 10:17:52 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85704 ']' 00:23:39.207 10:17:52 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85704 00:23:39.207 10:17:52 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:23:39.207 10:17:52 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.207 10:17:52 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85704 00:23:39.207 killing process with pid 85704 00:23:39.207 10:17:52 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.207 10:17:52 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.208 10:17:52 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85704' 00:23:39.208 10:17:52 keyring_linux -- common/autotest_common.sh@973 -- # kill 85704 00:23:39.208 10:17:52 keyring_linux -- common/autotest_common.sh@978 -- # wait 85704 00:23:39.476 00:23:39.476 real 0m5.989s 00:23:39.476 user 0m11.417s 00:23:39.476 sys 0m1.602s 00:23:39.476 10:17:53 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.476 ************************************ 00:23:39.476 END TEST keyring_linux 00:23:39.476 ************************************ 00:23:39.476 10:17:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:39.748 10:17:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:39.748 10:17:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:39.748 10:17:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:39.748 10:17:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:39.748 10:17:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:39.748 10:17:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:23:39.748 10:17:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:39.748 10:17:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.748 10:17:53 -- common/autotest_common.sh@10 -- # set +x 00:23:39.748 10:17:53 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:39.748 10:17:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:39.748 10:17:53 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:39.748 10:17:53 -- common/autotest_common.sh@10 -- # set +x 00:23:41.651 INFO: APP EXITING 00:23:41.651 INFO: killing all VMs 
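The "1 links removed" messages a few entries back come from the keyring cleanup: each PSK is looked up in the session keyring by name and then unlinked by its serial number. Condensed from the log (the key names are the test's own), the pattern is:

# Resolve each test key's serial in the session keyring and unlink it.
for name in :spdk-test:key0 :spdk-test:key1; do
  sn=$(keyctl search @s user "$name") && keyctl unlink "$sn"
done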
00:23:41.651 INFO: killing vhost app 00:23:41.651 INFO: EXIT DONE 00:23:42.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:42.218 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:42.218 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:42.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:42.785 Cleaning 00:23:42.785 Removing: /var/run/dpdk/spdk0/config 00:23:42.785 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:42.785 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:42.785 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:42.785 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:42.785 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:42.785 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:42.785 Removing: /var/run/dpdk/spdk1/config 00:23:42.785 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:42.785 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:42.785 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:42.785 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:42.785 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:42.785 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:43.044 Removing: /var/run/dpdk/spdk2/config 00:23:43.044 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:43.044 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:43.044 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:43.044 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:43.044 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:43.044 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:43.044 Removing: /var/run/dpdk/spdk3/config 00:23:43.044 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:43.044 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:43.044 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:43.044 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:43.044 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:43.044 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:43.044 Removing: /var/run/dpdk/spdk4/config 00:23:43.044 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:43.044 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:43.044 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:43.044 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:43.044 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:43.044 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:43.044 Removing: /dev/shm/nvmf_trace.0 00:23:43.044 Removing: /dev/shm/spdk_tgt_trace.pid56695 00:23:43.044 Removing: /var/run/dpdk/spdk0 00:23:43.044 Removing: /var/run/dpdk/spdk1 00:23:43.044 Removing: /var/run/dpdk/spdk2 00:23:43.044 Removing: /var/run/dpdk/spdk3 00:23:43.044 Removing: /var/run/dpdk/spdk4 00:23:43.044 Removing: /var/run/dpdk/spdk_pid56542 00:23:43.044 Removing: /var/run/dpdk/spdk_pid56695 00:23:43.044 Removing: /var/run/dpdk/spdk_pid56894 00:23:43.044 Removing: /var/run/dpdk/spdk_pid56980 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57000 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57110 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57128 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57267 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57463 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57617 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57689 00:23:43.044 
Removing: /var/run/dpdk/spdk_pid57766 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57863 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57935 00:23:43.044 Removing: /var/run/dpdk/spdk_pid57972 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58009 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58073 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58165 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58598 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58648 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58694 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58702 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58769 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58785 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58852 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58861 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58906 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58917 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58962 00:23:43.044 Removing: /var/run/dpdk/spdk_pid58967 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59103 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59139 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59221 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59548 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59560 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59596 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59610 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59625 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59650 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59662 00:23:43.044 Removing: /var/run/dpdk/spdk_pid59679 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59698 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59711 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59727 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59746 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59765 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59780 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59799 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59813 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59834 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59853 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59861 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59882 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59918 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59926 00:23:43.045 Removing: /var/run/dpdk/spdk_pid59961 00:23:43.045 Removing: /var/run/dpdk/spdk_pid60033 00:23:43.045 Removing: /var/run/dpdk/spdk_pid60056 00:23:43.045 Removing: /var/run/dpdk/spdk_pid60071 00:23:43.045 Removing: /var/run/dpdk/spdk_pid60099 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60109 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60122 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60161 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60180 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60203 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60218 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60222 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60237 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60242 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60260 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60265 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60279 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60312 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60334 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60349 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60372 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60387 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60395 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60435 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60451 00:23:43.303 Removing: 
/var/run/dpdk/spdk_pid60475 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60488 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60490 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60503 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60511 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60518 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60526 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60533 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60615 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60663 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60781 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60814 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60854 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60874 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60896 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60910 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60942 00:23:43.303 Removing: /var/run/dpdk/spdk_pid60963 00:23:43.303 Removing: /var/run/dpdk/spdk_pid61043 00:23:43.303 Removing: /var/run/dpdk/spdk_pid61060 00:23:43.303 Removing: /var/run/dpdk/spdk_pid61104 00:23:43.303 Removing: /var/run/dpdk/spdk_pid61185 00:23:43.303 Removing: /var/run/dpdk/spdk_pid61241 00:23:43.303 Removing: /var/run/dpdk/spdk_pid61272 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61372 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61414 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61452 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61681 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61776 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61810 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61834 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61871 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61908 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61940 00:23:43.304 Removing: /var/run/dpdk/spdk_pid61972 00:23:43.304 Removing: /var/run/dpdk/spdk_pid62372 00:23:43.304 Removing: /var/run/dpdk/spdk_pid62416 00:23:43.304 Removing: /var/run/dpdk/spdk_pid62757 00:23:43.304 Removing: /var/run/dpdk/spdk_pid63229 00:23:43.304 Removing: /var/run/dpdk/spdk_pid63511 00:23:43.304 Removing: /var/run/dpdk/spdk_pid64369 00:23:43.304 Removing: /var/run/dpdk/spdk_pid65296 00:23:43.304 Removing: /var/run/dpdk/spdk_pid65419 00:23:43.304 Removing: /var/run/dpdk/spdk_pid65481 00:23:43.304 Removing: /var/run/dpdk/spdk_pid66890 00:23:43.304 Removing: /var/run/dpdk/spdk_pid67208 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71022 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71376 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71485 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71618 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71646 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71667 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71694 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71784 00:23:43.304 Removing: /var/run/dpdk/spdk_pid71915 00:23:43.304 Removing: /var/run/dpdk/spdk_pid72064 00:23:43.304 Removing: /var/run/dpdk/spdk_pid72145 00:23:43.304 Removing: /var/run/dpdk/spdk_pid72330 00:23:43.304 Removing: /var/run/dpdk/spdk_pid72400 00:23:43.304 Removing: /var/run/dpdk/spdk_pid72485 00:23:43.304 Removing: /var/run/dpdk/spdk_pid72843 00:23:43.304 Removing: /var/run/dpdk/spdk_pid73262 00:23:43.304 Removing: /var/run/dpdk/spdk_pid73263 00:23:43.304 Removing: /var/run/dpdk/spdk_pid73264 00:23:43.304 Removing: /var/run/dpdk/spdk_pid73532 00:23:43.304 Removing: /var/run/dpdk/spdk_pid73791 00:23:43.563 Removing: /var/run/dpdk/spdk_pid74184 00:23:43.563 Removing: /var/run/dpdk/spdk_pid74186 00:23:43.563 Removing: /var/run/dpdk/spdk_pid74508 00:23:43.563 Removing: /var/run/dpdk/spdk_pid74528 
00:23:43.563 Removing: /var/run/dpdk/spdk_pid74546 00:23:43.563 Removing: /var/run/dpdk/spdk_pid74578 00:23:43.563 Removing: /var/run/dpdk/spdk_pid74585 00:23:43.563 Removing: /var/run/dpdk/spdk_pid74930 00:23:43.563 Removing: /var/run/dpdk/spdk_pid74979 00:23:43.563 Removing: /var/run/dpdk/spdk_pid75304 00:23:43.563 Removing: /var/run/dpdk/spdk_pid75508 00:23:43.563 Removing: /var/run/dpdk/spdk_pid75933 00:23:43.563 Removing: /var/run/dpdk/spdk_pid76484 00:23:43.563 Removing: /var/run/dpdk/spdk_pid77380 00:23:43.563 Removing: /var/run/dpdk/spdk_pid78008 00:23:43.563 Removing: /var/run/dpdk/spdk_pid78016 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80061 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80113 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80166 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80223 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80333 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80380 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80433 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80494 00:23:43.563 Removing: /var/run/dpdk/spdk_pid80859 00:23:43.563 Removing: /var/run/dpdk/spdk_pid82070 00:23:43.563 Removing: /var/run/dpdk/spdk_pid82209 00:23:43.563 Removing: /var/run/dpdk/spdk_pid82456 00:23:43.563 Removing: /var/run/dpdk/spdk_pid83053 00:23:43.563 Removing: /var/run/dpdk/spdk_pid83214 00:23:43.563 Removing: /var/run/dpdk/spdk_pid83371 00:23:43.563 Removing: /var/run/dpdk/spdk_pid83468 00:23:43.563 Removing: /var/run/dpdk/spdk_pid83635 00:23:43.563 Removing: /var/run/dpdk/spdk_pid83744 00:23:43.563 Removing: /var/run/dpdk/spdk_pid84442 00:23:43.563 Removing: /var/run/dpdk/spdk_pid84483 00:23:43.563 Removing: /var/run/dpdk/spdk_pid84520 00:23:43.563 Removing: /var/run/dpdk/spdk_pid84774 00:23:43.563 Removing: /var/run/dpdk/spdk_pid84807 00:23:43.563 Removing: /var/run/dpdk/spdk_pid84842 00:23:43.563 Removing: /var/run/dpdk/spdk_pid85312 00:23:43.563 Removing: /var/run/dpdk/spdk_pid85322 00:23:43.563 Removing: /var/run/dpdk/spdk_pid85583 00:23:43.563 Removing: /var/run/dpdk/spdk_pid85704 00:23:43.563 Removing: /var/run/dpdk/spdk_pid85722 00:23:43.563 Clean 00:23:43.563 10:17:57 -- common/autotest_common.sh@1453 -- # return 0 00:23:43.563 10:17:57 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:43.563 10:17:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.563 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:23:43.563 10:17:57 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:43.563 10:17:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.563 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:23:43.821 10:17:57 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:43.821 10:17:57 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:43.821 10:17:57 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:43.821 10:17:57 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:43.821 10:17:57 -- spdk/autotest.sh@398 -- # hostname 00:23:43.821 10:17:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:43.821 geninfo: WARNING: invalid characters removed from testname! 
00:24:10.380 10:18:23 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:13.664 10:18:27 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:16.948 10:18:30 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:19.482 10:18:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:22.016 10:18:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:24.547 10:18:38 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:27.879 10:18:41 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:27.879 10:18:41 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:27.879 10:18:41 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:27.879 10:18:41 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:27.879 10:18:41 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:27.879 10:18:41 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:27.879 + [[ -n 5211 ]] 00:24:27.879 + sudo kill 5211 00:24:27.888 [Pipeline] } 00:24:27.904 [Pipeline] // timeout 00:24:27.909 [Pipeline] } 00:24:27.919 [Pipeline] // stage 00:24:27.923 [Pipeline] } 00:24:27.938 [Pipeline] // catchError 00:24:27.947 [Pipeline] stage 00:24:27.949 [Pipeline] { (Stop VM) 00:24:27.962 [Pipeline] sh 00:24:28.241 + vagrant halt 00:24:32.427 ==> default: Halting domain... 
00:24:37.713 [Pipeline] sh 00:24:37.995 + vagrant destroy -f 00:24:42.188 ==> default: Removing domain... 00:24:42.201 [Pipeline] sh 00:24:42.491 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:24:42.501 [Pipeline] } 00:24:42.516 [Pipeline] // stage 00:24:42.521 [Pipeline] } 00:24:42.536 [Pipeline] // dir 00:24:42.541 [Pipeline] } 00:24:42.555 [Pipeline] // wrap 00:24:42.562 [Pipeline] } 00:24:42.575 [Pipeline] // catchError 00:24:42.585 [Pipeline] stage 00:24:42.587 [Pipeline] { (Epilogue) 00:24:42.601 [Pipeline] sh 00:24:42.884 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:51.020 [Pipeline] catchError 00:24:51.022 [Pipeline] { 00:24:51.035 [Pipeline] sh 00:24:51.316 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:51.316 Artifacts sizes are good 00:24:51.325 [Pipeline] } 00:24:51.340 [Pipeline] // catchError 00:24:51.352 [Pipeline] archiveArtifacts 00:24:51.359 Archiving artifacts 00:24:51.468 [Pipeline] cleanWs 00:24:51.480 [WS-CLEANUP] Deleting project workspace... 00:24:51.480 [WS-CLEANUP] Deferred wipeout is used... 00:24:51.486 [WS-CLEANUP] done 00:24:51.488 [Pipeline] } 00:24:51.504 [Pipeline] // stage 00:24:51.509 [Pipeline] } 00:24:51.524 [Pipeline] // node 00:24:51.530 [Pipeline] End of Pipeline 00:24:51.569 Finished: SUCCESS